On the best constant in Marcinkiewicz–Zygmund inequality

Statistics & Probability Letters 53 (2001) 227–233

Yao-Feng Ren^a,∗, Han-Ying Liang^b

^a Department of Mathematics, The Naval Academy of Engineering of China, Wuhan 430033, People's Republic of China
^b Department of Applied Mathematics, Tongji University, Shanghai 200092, People's Republic of China
Received March 1999

Abstract

Let $\{X_n; n \geq 1\}$ be a sequence of independent r.v.'s with $EX_n = 0$, and let $C(p)$ be the best constant in the following Marcinkiewicz–Zygmund inequality:

$$E\Big|\sum_{k=1}^n X_k\Big|^p \leq C(p)\, n^{p/2-1} \sum_{k=1}^n E|X_k|^p, \quad p \geq 2.$$

In this paper we prove that $[C(p)]^{1/p}$ grows like $\sqrt{p}$ as $p \to \infty$ and give an estimate $C(p) \leq (3\sqrt{2})^p p^{p/2}$. © 2001 Published by Elsevier Science B.V.

1. Introduction

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and $\{X_n; n \geq 1\}$ a sequence of independent random variables with zero means. Recently, a lot of attention has been given to the best constants, or the growth rate of the constants, appearing in various moment inequalities. Johnson et al. (1985) showed that in the Rosenthal inequality

$$\Big(E\Big|\sum_{k=1}^n X_k\Big|^p\Big)^{1/p} \leq A(p) \max\Big\{\Big(\sum_{k=1}^n E|X_k|^2\Big)^{1/2},\ \Big(\sum_{k=1}^n E|X_k|^p\Big)^{1/p}\Big\} \tag{1}$$

the constant $A(p)$ grows like $p/\ln p$ as $p \to \infty$. Burkholder (1988) proved that for the following Marcinkiewicz–Zygmund inequality

$$E\Big|\sum_{k=1}^n X_k\Big|^p \leq B(p)\, E\Big(\sum_{k=1}^n X_k^2\Big)^{p/2}, \quad p \geq 1, \tag{2}$$



∗ Corresponding author. E-mail address: [email protected] (H.-Y. Liang).

0167-7152/01/$ - see front matter © 2001 Published by Elsevier Science B.V. PII: S0167-7152(01)00015-3


$[B(p)]^{1/p} = p - 1$ is the best possible for $p \geq 2$. As a consequence of (2) we can write another Marcinkiewicz–Zygmund inequality

$$E\Big|\sum_{k=1}^n X_k\Big|^p \leq C(p)\, n^{p/2-1} \sum_{k=1}^n E|X_k|^p, \quad p \geq 2, \tag{3}$$

and if we denote by $C(p)$ the best constant in (3), then $[C(p)]^{1/p} \leq p - 1$ by the result of Burkholder. Inequality (3) is a well-known inequality with wide applications in probability and statistics, and several authors have studied the constant $C(p)$. Dharmadhikari et al. (1968) showed that $C(p) \leq [8(p-1)\max(1, 2^{p-3})]^p$. Later, Dharmadhikari and Jogdeo (1969) gave a much smaller estimate, $C(p) \leq (p/2)(p-1)\max(1, 2^{p-3})[1 + 2p^{-1}K_{2m}^{(p-2)/2m}]$, where the integer $m$ satisfies $2m \leq p < 2m + 2$ and $K_{2m}$ is defined by $K_{2m} = \sum_{r=1}^m r^{2m-1}/(r-1)!$. This is a good estimate for not very large $p$, but since $K_{2m}$ also grows like $p^p$ as $p \to \infty$, it again yields only $[C(p)]^{1/p} = O(p)$.

In the present paper we prove that the growth rate of $[C(p)]^{1/p}$ is $\sqrt{p}$ and give the estimate $C(p) \leq (3\sqrt{2})^p p^{p/2}$. Our method is to establish a martingale inequality and to construct an example. Of course, our estimate is not exactly the best constant, but it provides explicit information and is sharp in order for large $p$.

2. Martingale inequality

Let $(\Omega, \mathcal{F}, \mathbf{F} = (\mathcal{F}_t)_{t \geq 0}, P)$ be a filtered probability space with the filtration $\mathbf{F} = (\mathcal{F}_t)_{t \geq 0}$ satisfying the usual conditions. Denote by $\mathcal{M}^2_{\mathrm{loc}}$ the collection of all locally square integrable martingales based on $(\Omega, \mathcal{F}, \mathbf{F}, P)$. For $M \in \mathcal{M}^2_{\mathrm{loc}}$, $[M]$ is the quadratic variation of $M$, $\langle M \rangle$ is the predictable quadratic variation of $M$, $M^c$ is the continuous part of $M$, and $M^d$ is the jump part of $M$; $M^*_\infty = \sup_{t \geq 0} |M_t|$. For related concepts and further properties, one may refer to He et al. (1992). For a discrete-time martingale $\{f_n, \mathcal{F}_n; n \geq 1\}$, define $d_n = f_n - f_{n-1}$ and

$$f^*_\infty = \sup_{n \geq 1} |f_n|, \qquad s(f) = \Big(\sum_{k=1}^\infty d_k^2\Big)^{1/2}, \qquad \sigma(f) = \Big(\sum_{k=1}^\infty E(d_k^2 \mid \mathcal{F}_{k-1})\Big)^{1/2}.$$
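As an illustrative aside (not part of the original paper), the quantities $s(f)$ and $\sigma(f)$ can be computed directly for a toy martingale of partial sums of independent, mean-zero increments, for which $E(d_k^2 \mid \mathcal{F}_{k-1}) = Ed_k^2$; the increment distribution below is an arbitrary choice:

```python
# Toy computation of s(f) and sigma(f) for the martingale of partial
# sums f_n = d_1 + ... + d_n of independent, mean-zero increments.
# With independent increments E(d_k^2 | F_{k-1}) = E d_k^2, so sigma(f)
# is deterministic while s(f) depends on the sample path.
import math
import random

random.seed(1)

n = 10
# increments uniform on {-1, 0, 1}: mean 0, variance E d_k^2 = 2/3
d = [random.choice((-1, 0, 1)) for _ in range(n)]

s_f = math.sqrt(sum(dk ** 2 for dk in d))   # path-dependent square function
sigma_f = math.sqrt(n * (2.0 / 3.0))        # (sum_k E d_k^2)^{1/2}

print("s(f) =", s_f, " sigma(f) =", sigma_f)
```

Here $s(f)$ varies from path to path while $\sigma(f) = \sqrt{2n/3}$ is deterministic; this distinction is exactly why both $s^2(f)$ and $\sigma^2(f)$ appear in Corollary 1.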

Lemma 1. Suppose that $M \in \mathcal{M}^2_{\mathrm{loc}}$, $M_0 = 0$. Then for every $\lambda > 0$, $k > 0$ and $A \in \mathcal{F}_0$,

$$P(M^*_\infty \geq \lambda,\ [M]_\infty + \langle M^d\rangle_\infty < k,\ A) \leq 2\exp\Big(-\frac{\lambda^2}{2k}\Big)\,P(A). \tag{4}$$

Proof. Write

$$C_t = \sum_{s \leq t} [(\Delta M_s)^+]^2, \qquad D_t = \sum_{s \leq t} [(\Delta M_s)^-]^2, \qquad H_t = \langle M^c\rangle_t + C_t + D^{(p)}_t,$$

where $(D^{(p)}_t)$ is the dual predictable projection of $(D_t)$. By Proposition 4.2.1 of Barlow et al. (1986), $Z = (Z_t)_{t \geq 0} = (\exp\{M_t - \frac{1}{2}H_t\})_{t \geq 0}$ is a supermartingale. For any $u > 0$ and $A \in \mathcal{F}_0$, write $Z^u_t I_A = \exp\{uM_t - (u^2/2)H_t\}I_A$. Then $Z^u I_A = (Z^u_t I_A)_{t \geq 0}$ is also a supermartingale. By the supermartingale inequality,

$$P\Big(\sup_{t \geq 0} M_t \geq \lambda,\ [M]_\infty + \langle M^d\rangle_\infty < k,\ A\Big) \leq P\Big(\sup_{t \geq 0} Z^u_t I_A \geq e^{u\lambda - ku^2/2}\Big) \leq \exp(-u\lambda + ku^2/2)\,P(A).$$


Taking $u = \lambda/k$, we have

$$P\Big(\sup_{t \geq 0} M_t \geq \lambda,\ [M]_\infty + \langle M^d\rangle_\infty < k,\ A\Big) \leq \exp(-\lambda^2/2k)\,P(A),$$

and, by the same argument applied to $-M$, we get

$$P(M^*_\infty \geq \lambda,\ [M]_\infty + \langle M^d\rangle_\infty < k,\ A) \leq 2\exp(-\lambda^2/2k)\,P(A). \qquad \square$$

Lemma 2. Suppose that $M \in \mathcal{M}^2_{\mathrm{loc}}$, $M_0 = 0$. Then for all $\delta > 0$, $\beta > 1 + \delta$ and $\lambda > 0$ the following inequality holds:

$$P(M^*_\infty \geq \beta\lambda,\ [M]_\infty + \langle M^d\rangle_\infty < \delta^2\lambda^2) \leq 2\exp\Big(-\frac{(\beta - 1 - \delta)^2}{2\delta^2}\Big)\,P(M^*_\infty \geq \lambda). \tag{5}$$

Proof. For each fixed $\lambda > 0$, define the stopping time $\tau = \inf\{t \geq 0:\ M^*_t \geq \lambda\}$ and set $\mathcal{G}_t = \mathcal{F}_{\tau + t}$, $\tilde{M}_t = (M_{\tau + t} - M_\tau)1(\tau < \infty)$. Then $(\tilde{M}_t, \mathcal{G}_t)_{t \geq 0}$ is a locally square integrable martingale with

$$[\tilde{M}]_\infty + \langle \tilde{M}^d\rangle_\infty \leq ([M]_\infty + \langle M^d\rangle_\infty)1(\tau < \infty), \qquad \tilde{M}^*_\infty \geq M^*_\infty - M^*_\tau, \qquad (\tau < \infty) \in \mathcal{G}_0.$$

By Lemma 1,

$$P(M^*_\infty \geq \beta\lambda,\ [M]_\infty + \langle M^d\rangle_\infty < \delta^2\lambda^2)$$
$$\leq P(M^*_\infty - M^*_\tau \geq (\beta - 1 - \delta)\lambda,\ [M]_\infty + \langle M^d\rangle_\infty < \delta^2\lambda^2,\ \tau < \infty)$$
$$\leq P(\tilde{M}^*_\infty \geq (\beta - 1 - \delta)\lambda,\ [\tilde{M}]_\infty + \langle \tilde{M}^d\rangle_\infty < \delta^2\lambda^2,\ \tau < \infty)$$
$$\leq 2\exp\Big(-\frac{(\beta - 1 - \delta)^2}{2\delta^2}\Big)\,P(\tau < \infty) = 2\exp\Big(-\frac{(\beta - 1 - \delta)^2}{2\delta^2}\Big)\,P(M^*_\infty \geq \lambda). \qquad \square$$

Theorem 1. Suppose that $M \in \mathcal{M}^2_{\mathrm{loc}}$, $M_0 = 0$. Then for $p \geq 1$,

$$E(M^*_\infty)^p \leq 3^p p^{p/2}\, E([M]_\infty + \langle M^d\rangle_\infty)^{p/2}. \tag{6}$$

Proof. By the routine good-$\lambda$ approach, from (5) we get that for all $0 < \delta < \beta - 1$,

$$E(M^*_\infty)^p \leq \Big[1 - 2\beta^p \exp\Big(-\frac{(\beta - 1 - \delta)^2}{2\delta^2}\Big)\Big]^{-1} \Big(\frac{\beta}{\delta}\Big)^p E([M]_\infty + \langle M^d\rangle_\infty)^{p/2}. \tag{7}$$

Now we choose $\delta$ and $\beta$ carefully to prove (6). For $p \geq 9$, taking $\beta = A > e$ and

$$\delta = \frac{A - 1}{\sqrt{2\ln A}\,(1 + 1/\sqrt{p})\sqrt{p}},$$


we have

$$\frac{(\beta - 1 - \delta)^2}{2\delta^2} = \frac{1}{2}\Big(\frac{\beta - 1}{\delta} - 1\Big)^2 = \frac{1}{2}\big[\sqrt{2\ln A}\,(1 + 1/\sqrt{p})\sqrt{p} - 1\big]^2$$
$$= p\ln A + \big(2\ln A - \sqrt{2\ln A}\big)\sqrt{p} + \big(\ln A - \sqrt{2\ln A} + \tfrac{1}{2}\big).$$

Hence

$$1 - 2\beta^p \exp\Big(-\frac{(\beta - 1 - \delta)^2}{2\delta^2}\Big) = 1 - \exp\big\{-\big(2\ln A - \sqrt{2\ln A}\big)\sqrt{p} - \big(\ln A - \sqrt{2\ln A} - \ln 2 + \tfrac{1}{2}\big)\big\}.$$

Taking $A = 3.51$, a direct computation gives

$$1 - 2\beta^p \exp\Big(-\frac{(\beta - 1 - \delta)^2}{2\delta^2}\Big) \geq 1 - e^{-2.25}.$$

Therefore,

$$\Big[1 - 2\beta^p \exp\Big(-\frac{(\beta - 1 - \delta)^2}{2\delta^2}\Big)\Big]^{-1}\Big(\frac{\beta}{\delta}\Big)^p \leq [1 - e^{-2.25}]^{-1}\Big[\frac{A}{A - 1}\sqrt{2\ln A}\,(1 + 1/\sqrt{p})\sqrt{p}\Big]^p$$
$$\leq 1.118\,[2.218\,(1 + 1/\sqrt{p})\sqrt{p}]^p \leq 3^p p^{p/2}.$$

For $p \leq 9$, by the inequality of Burkholder (1988),

$$E(M^*_\infty)^p \leq p^p E[M]_\infty^{p/2} \leq p^p E([M]_\infty + \langle M^d\rangle_\infty)^{p/2} \leq 3^p p^{p/2} E([M]_\infty + \langle M^d\rangle_\infty)^{p/2},$$

since $p^p \leq 3^p p^{p/2}$ for $p \leq 9$. Hence (6) is true. $\square$

If $M$ is continuous, then from (6) we recover the Davis inequality (cf. Davis (1968)) with a slightly larger constant but the same growth rate; hence inequality (6) is an adequate extension of the Davis inequality. Note that every discrete-time martingale $\{f_n, \mathcal{F}_n; n \geq 1\}$ can be embedded into continuous time as a step-type martingale $M_t = f_{[t]}$, $t \geq 0$, with respect to the filtration $\mathcal{G}_t = \mathcal{F}_{[t]}$, $t \geq 0$, and then

$$[M]_t = \sum_{k=1}^{[t]} d_k^2, \qquad \langle M \rangle_t = \sum_{k=1}^{[t]} E(d_k^2 \mid \mathcal{F}_{k-1}), \quad t \geq 0.$$

We have the following corollary.

Corollary 1. Suppose that $\{f_n, \mathcal{F}_n; n \geq 1\}$ is a square integrable martingale. Then

$$E|f^*_\infty|^p \leq 3^p p^{p/2}\, E[s^2(f) + \sigma^2(f)]^{p/2}, \quad p \geq 1. \tag{8}$$


3. Marcinkiewicz–Zygmund inequality

Theorem 2. Let $\{X_n; n \geq 1\}$ be a sequence of independent random variables with $EX_n = 0$ and $E|X_n|^p < \infty$, and let $C(p)$ be the best constant in the following Marcinkiewicz–Zygmund inequality:

$$E\Big|\sum_{k=1}^n X_k\Big|^p \leq C(p)\, n^{p/2-1} \sum_{k=1}^n E|X_k|^p, \quad p \geq 2.$$

Then

(i) $C(p) \leq (3\sqrt{2})^p p^{p/2}$; (9)

(ii) $[C(p)]^{1/p}$ grows like $\sqrt{p}$ as $p \to \infty$. (10)
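For orientation (a numerical aside, not from the paper), the estimate (9) can be tabulated against the bound $(p-1)^p$ coming from Burkholder's inequality and against the Dharmadhikari–Jogdeo estimate quoted in the introduction; the script below simply evaluates the three formulas as stated there, for a few even values of $p$:

```python
# Evaluate three upper bounds on C(p) (formulas as quoted in the text):
#   bound_new(p)        = (3*sqrt(2))^p * p^(p/2)             -- estimate (9)
#   bound_burkholder(p) = (p-1)^p                             -- via Burkholder (1988)
#   bound_dj(p)         = (p/2)(p-1) max(1, 2^(p-3)) *
#                         [1 + 2 p^{-1} K_{2m}^{(p-2)/(2m)}]  -- Dharmadhikari-Jogdeo (1969)
# with 2m <= p < 2m+2 and K_{2m} = sum_{r=1}^m r^{2m-1}/(r-1)!.
import math

def bound_new(p):
    return (3 * math.sqrt(2)) ** p * p ** (p / 2)

def bound_burkholder(p):
    return (p - 1) ** p

def bound_dj(p):
    m = int(p) // 2  # the integer m with 2m <= p < 2m + 2 (p even here)
    k2m = sum(r ** (2 * m - 1) / math.factorial(r - 1) for r in range(1, m + 1))
    return (p / 2) * (p - 1) * max(1, 2 ** (p - 3)) * (1 + (2 / p) * k2m ** ((p - 2) / (2 * m)))

for p in (4, 8, 16, 32):
    print(p, bound_dj(p), bound_burkholder(p), bound_new(p))
```

For small $p$ the Dharmadhikari–Jogdeo estimate is much smaller (e.g. $30$ versus $5184$ at $p = 4$), matching the remark in the introduction that it is good for not very large $p$, while (9) has the smaller growth order $(\mathrm{const}\cdot\sqrt{p})^p$ as $p \to \infty$.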

Proof. (i) For any integer $n > 0$, define

$$f_n = \sum_{k=1}^n X_k, \qquad \mathcal{F}_n = \sigma(X_1, \ldots, X_n);$$

then $\{f_k, \mathcal{F}_k; 1 \leq k \leq n\}$ is a square integrable martingale with

$$s(f) = \Big(\sum_{k=1}^n X_k^2\Big)^{1/2}, \qquad \sigma(f) = \Big(\sum_{k=1}^n EX_k^2\Big)^{1/2}.$$

By the $C_r$-inequality,

$$E[s^2(f)]^{p/2} \leq n^{p/2-1}\sum_{k=1}^n E|X_k|^p \quad (p \geq 2),$$

and from the proof of Theorem 3.20 of Petrov (1975) we have

$$[\sigma^2(f)]^{p/2} \leq n^{p/2-1}\sum_{k=1}^n E|X_k|^p \quad (p \geq 2).$$

Hence by Corollary 1 we obtain

$$E\Big|\sum_{k=1}^n X_k\Big|^p \leq 3^p p^{p/2}\, E[s^2(f) + \sigma^2(f)]^{p/2} \leq 3^p p^{p/2}\, 2^{p/2-1}\big(E[s^2(f)]^{p/2} + [\sigma^2(f)]^{p/2}\big)$$
$$\leq (3\sqrt{2})^p p^{p/2} n^{p/2-1}\sum_{k=1}^n E|X_k|^p \quad (p \geq 2).$$

(ii) For any function $h(p) > 0$ with $\lim_{p\to\infty} h(p)/\sqrt{p} = 0$, we will prove that there exists no constant $D$ such that

$$\Big(E\Big|\sum_{k=1}^n X_k\Big|^p\Big)^{1/p} \leq D\,h(p)\Big(n^{p/2-1}\sum_{k=1}^n E|X_k|^p\Big)^{1/p}$$

for all $p \geq 2$ and all independent random variables.


We may assume $2h(p) < \sqrt{p}$ for $p$ large enough. Let $\{X_n; n \geq 1\}$ be a sequence of independent identically distributed random variables with

$$P(X_n = 1) = P(X_n = -1) = h(p)/\sqrt{p}, \qquad P(X_n = 0) = 1 - 2h(p)/\sqrt{p}.$$

Then

$$E|S_n|^p \geq n^p[P(S_n = n) + P(S_n = -n)] = 2n^p (h(p)/\sqrt{p})^n, \qquad \sum_{k=1}^n E|X_k|^p = 2n\,h(p)/\sqrt{p},$$

where $S_n = X_1 + \cdots + X_n$. If there exists a constant $D$ such that

$$E|S_n|^p \leq D^p [h(p)]^p n^{p/2-1} \sum_{k=1}^n E|X_k|^p,$$

then

$$2n^p (h(p)/\sqrt{p})^n \leq D^p [h(p)]^p n^{p/2-1} \cdot 2n\,h(p)/\sqrt{p},$$

so that

$$D \geq \sqrt{n}\,[h(p)/\sqrt{p}]^{(n-1)/p}[h(p)]^{-1}. \tag{11}$$

Put

$$g(p) = h(p)\ln[\sqrt{p}/h(p)];$$

then

$$\lim_{p\to\infty} h(p)/g(p) = 0, \qquad \lim_{p\to\infty} g(p)/\sqrt{p} = 0.$$

Choosing $n$ so that $n - 1 \leq g^2(p) < n$, we get

$$D \geq [g(p)/h(p)]\,[h(p)/\sqrt{p}]^{g^2(p)/p}.$$

Put

$$F(p) = [h(p)/\sqrt{p}]^{g^2(p)/p}.$$

Since

$$\lim_{p\to\infty} \ln F(p) = \lim_{p\to\infty} \frac{g^2(p)}{p}\ln[h(p)/\sqrt{p}] = -\lim_{p\to\infty} \frac{h^2(p)}{p}\ln^3\frac{\sqrt{p}}{h(p)} = 0,$$

we have

$$\lim_{p\to\infty} [h(p)/\sqrt{p}]^{g^2(p)/p} = 1.$$

Letting $p \to \infty$ in (11) we get a contradiction, since $D$ is a constant while $\lim_{p\to\infty} g(p)/h(p) = \infty$. $\square$
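As a concrete check of Theorem 2(i) (an added illustration, not part of the paper), for i.i.d. Rademacher variables $E|S_n|^4$ can be evaluated exactly from the binomial distribution and compared with the right-hand side of (3) using the constant from (9); the margin is large, since the constant is not claimed to be sharp:

```python
# Exact verification of inequality (3) for p = 4 with i.i.d. Rademacher
# variables, P(X_k = 1) = P(X_k = -1) = 1/2.  Then S_n = 2B - n with
# B ~ Binomial(n, 1/2) and sum_k E|X_k|^4 = n, so (3) with the constant
# from (9) reads:  E S_n^4 <= (3*sqrt(2))^4 * 4^(4/2) * n^(4/2-1) * n = 5184 n^2.
import math

def exact_fourth_moment(n):
    """E S_n^4 computed exactly from the Binomial(n, 1/2) distribution."""
    return sum(math.comb(n, k) * (2 * k - n) ** 4 for k in range(n + 1)) / 2 ** n

bound_const = (3 * math.sqrt(2)) ** 4 * 4 ** 2   # = 5184 (up to rounding)

for n in range(1, 51):
    lhs = exact_fourth_moment(n)
    assert lhs <= bound_const * n * n            # inequality (3) for p = 4
    # classical identity E S_n^4 = 3n^2 - 2n for Rademacher sums
    assert abs(lhs - (3 * n * n - 2 * n)) < 1e-6
print("inequality (3) verified exactly for p = 4, n = 1..50")
```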


Acknowledgements

The authors would like to thank Prof. Hu Dihe for his advice and encouragement.

References

Barlow, M.T., Jacka, S.D., Yor, M., 1986. Inequalities for a pair of processes stopped at a random time. Proc. London Math. Soc. 52, 142–172.
Burkholder, D.L., 1988. Sharp inequalities for martingales and stochastic integrals. Astérisque 157–158, 75–94.
Davis, B., 1968. On the Lp-norms of stochastic integrals and other martingales. Duke Math. J. 43, 697–704.
Dharmadhikari, S.W., Fabian, V., Jogdeo, K., 1968. Bounds on the moments of martingales. Ann. Math. Statist. 39, 1719–1723.
Dharmadhikari, S.W., Jogdeo, K., 1969. Bounds on moments of sums of random variables. Ann. Math. Statist. 40, 1506–1509.
He, S.W., Wang, J.G., Yan, J.A., 1992. Semimartingale Theory and Stochastic Calculus. Science Press & CRC Press, Beijing & Boca Raton.
Johnson, W.B., Schechtman, G., Zinn, J., 1985. Best constants in moment inequalities for linear combinations of independent and exchangeable random variables. Ann. Probab. 13, 234–253.
Petrov, V.V., 1975. Sums of Independent Random Variables. Springer, New York.