A note on the sequence of expected extremes

Statistics & Probability Letters 47 (2000) 295–300

Slawomir Kolodynski*
Department of Mathematics, University of Tennessee, 121 Ayres Hall, Knoxville, TN 37996, USA
Received May 1999; received in revised form September 1999

Abstract

Necessary and sufficient conditions for a sequence to be an expectation sequence of maximal (or minimal) order statistics are obtained. Applications to the study of convergence in distribution are given. © 2000 Elsevier Science B.V. All rights reserved.

MSC: 44A60; 62G30; 60E05; 60B10; 60E10

Keywords: Moment problem; Order statistics; Convergence in distribution

1. Introduction

Let $A = (a_{i,k},\ 1 \le i \le k \le n)$ be a triangular matrix of numbers. Kadane (1971, 1974) and Mallows (1973) considered the problem of deciding when such a matrix can be written as $a_{i,k} = E X_{i:k}$ for some random variable $X$, where $X_{i:k}$ is the $i$th order statistic of a sample $(X_1, \dots, X_k)$ of i.i.d. copies of $X$. By formulating an equivalent moment problem they obtained necessary and sufficient conditions in terms of determinants of certain matrices created from the diagonal sequence $(a_{k,k},\ 1 \le k \le n)$. These conditions, in particular, characterize sequences of expectations of maximal and minimal order statistics, and they also lead to Chebyshev-like bounds without any second-moment assumption (cf. Balakrishnan (1990), where finiteness of the second moment is required). These bounds are investigated in a paper by Huang (1998), who considered an analogous moment problem for infinite triangular matrices $(a_{i,k},\ 1 \le i \le k < \infty)$. Huang used the results of Kadane and Mallows to derive a simple and intuitive necessary condition for a given sequence of numbers $(\mu_n,\ n \ge 1)$ to be the expectations of maximal order statistics for a sample of i.i.d. random variables. Unfortunately, that paper contains a mistake (see the counterexamples in Section 2). The purpose of the present paper is to give a correct set of necessary and sufficient conditions for a sequence to be an expectation sequence of maximal (or minimal) order statistics, as well as to give some applications of this characterization to the study of convergence in distribution.

* Tel.: +1-423-974-2461.



2. Notation and lemmas

A sequence $(\mu_k,\ k \ge 0)$ will be called an expectation sequence of maximal order statistics if there exists a random variable $X$ with finite expectation such that
$$\mu_k = M_k(X) := E \max\{X_1, \dots, X_k\}, \quad k \ge 1,$$
where $(X_i,\ i \ge 1)$ is a sequence of i.i.d. copies of $X$ and $\mu_0 = M_0(X) := 0$. Similarly, a sequence $(\mu_k,\ k \ge 0)$ will be called an expectation sequence of minimal order statistics if
$$\mu_k = m_k(X) := E \min\{X_1, \dots, X_k\}, \quad k \ge 1, \qquad \mu_0 = m_0(X) := 0$$
for some random variable $X$.

Let $\Delta$ denote the forward difference operator
$$\Delta : \mathbb{R}^\infty \to \mathbb{R}^\infty, \qquad (\Delta \mu)_k = \mu_k - \mu_{k+1}, \quad \mu \in \mathbb{R}^\infty.$$
$\Delta^n$ will denote the $n$th iterate of the operator $\Delta$, $\Delta^n = \Delta \Delta^{n-1}$, and $(\Delta^n \mu)_k$ will be the $k$th element of the sequence $\Delta^n \mu$. It follows by induction that
$$(\Delta^k \mu)_n = \sum_{j=0}^{k} (-1)^j \binom{k}{j} \mu_{n+j}.$$
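As a quick numerical sanity check of this identity, here is a short Python sketch (the helper `delta_n` and the test values are mine, not from the paper):

```python
import math
import random

def delta_n(seq, n):
    """n-fold iterate of the difference operator, (Delta mu)_k = mu_k - mu_{k+1}."""
    for _ in range(n):
        seq = [seq[k] - seq[k + 1] for k in range(len(seq) - 1)]
    return seq

mu = [random.random() for _ in range(24)]

# (Delta^k mu)_n should equal the alternating binomial sum over mu_{n+j}
for k in range(1, 6):
    for n in range(6):
        lhs = delta_n(mu, k)[n]
        rhs = sum((-1) ** j * math.comb(k, j) * mu[n + j] for j in range(k + 1))
        assert abs(lhs - rhs) < 1e-9
print("binomial-sum identity verified on the window")
```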

In the proofs of Theorems 1 and 2 we will use a sequence-to-sequence transformation ($\mathbb{R}^{\{0,1,\dots\}} \to \mathbb{R}^{\{1,2,\dots\}}$) defined by
$$a \mapsto \tilde{a}, \qquad \tilde{a}_i = (\Delta^i a)_0, \quad i \ge 1. \tag{1}$$

This transformation has the property
$$(\Delta^k \tilde{a})_i = (\Delta^i a)_k, \quad i \ge 1,\ k \ge 0. \tag{2}$$
Note that $a_i = \int_{[0,1]} x^i\, \lambda(dx)$ for some measure $\lambda$ on $[0,1]$ if and only if $\tilde{a}_i = \int_{[0,1]} x^i\, \lambda \circ \sigma\,(dx)$, where $\sigma(x) = 1 - x$.

In Huang (1998, Theorem 2) it is claimed that the condition
$$\sum_{j=0}^{k} (-1)^j \binom{k}{j} \mu_{n+j} \le 0, \quad n, k = 1, 2, 3, \dots \tag{3}$$
or, equivalently,
$$(\Delta^k \mu)_n \le 0, \quad n \ge 1,\ k \ge 1 \tag{4}$$
is sufficient for a sequence $(\mu_k,\ k \ge 0)$, $\mu_0 := 0$, to be the expectation sequence of maximal order statistics. We will present two counterexamples to this claim, which will also show that conditions (ii) and (iii) in Theorem 1 are essential.

As the first counterexample consider the sequence $\mu_n = n$, $n \ge 0$. Then $(\Delta^k \mu)_n = -1$ if $k = 1$, $n \ge 1$, and $(\Delta^k \mu)_n = 0$ if $k \ge 2$, $n \ge 1$; thus condition (4) holds. Suppose $(\mu_n,\ n \ge 0)$ is the expectation sequence of maximal order statistics for a random variable $X$. Then
$$\mu_n = n \int_{(0,1)} x^{n-1} F^{\leftarrow}(x)\, dx, \quad n = 1, 2, \dots, \tag{5}$$
where $F^{\leftarrow}(x) = \sup\{t : F(t) \le x\}$ is the generalized upper inverse of $F$, the distribution of $X$. By the Dominated Convergence Theorem we should then have $\mu_n = o(n)$, which is not the case for $\mu = (0, 1, 2, 3, \dots)$, a contradiction.
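This counterexample is easy to confirm numerically; the sketch below (reusing a `delta_n` helper of my own naming) checks condition (4) on a finite window for $\mu_n = n$:

```python
def delta_n(seq, n):
    """n-fold difference operator, (Delta mu)_k = mu_k - mu_{k+1}."""
    for _ in range(n):
        seq = [seq[k] - seq[k + 1] for k in range(len(seq) - 1)]
    return seq

mu = list(range(40))  # mu_n = n, truncated to a finite window

# condition (4): (Delta^k mu)_n <= 0 for n >= 1, k >= 1
for k in range(1, 8):
    assert all(d <= 0 for d in delta_n(mu, k)[1:])

# yet mu_n is not o(n): mu_n / n == 1 for every n >= 1
print([mu[n] / n for n in (1, 10, 30)])  # [1.0, 1.0, 1.0]
```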


The second counterexample is the sequence $\mu$ such that $\mu_0 = 0$, $\mu_1 = -1$ and $\mu_n = 0$ for $n \ge 2$. Such $\mu$ satisfies condition (3). However, if $(\mu_n,\ n \ge 0)$ is a sequence of expected maximal order statistics of a random variable $X$ with distribution $F$, $\mu_0 := 0$, then
$$(\Delta^n \mu)_0 = n \int_{(0,1)} f_n(x)\, F^{\leftarrow}(x)\, dx,$$
where $f_n(x) = ((1-x)^n - 1)/(nx)$. Again by the Dominated Convergence Theorem,
$$(\Delta^n \mu)_0 = o(n), \tag{6}$$
which is not the case for $\mu = (0, -1, 0, 0, 0, \dots)$ because $(\Delta^n \mu)_0 = n$. These counterexamples address the necessity of conditions (ii) and (iii) in Theorem 1. In fact, they are dual in the sense that if $(0, 1, 2, 3, \dots)$ were the expectation sequence of maximal order statistics for some random variable $X$, then $(0, -1, 0, 0, \dots)$ would be the expectation sequence of maximal order statistics for $-X$.

The following lemma lists the solutions of the moment problem for the closed, half-open and open unit intervals, which will be used to characterize expectation sequences of maximal order statistics.

Lemma 1. Let $I = (0,1)$, $(0,1]$, $[0,1)$ or $[0,1]$ and let $(a_k,\ k \ge 0)$ be a sequence of numbers. Then $a_k = \int_I x^k\, \lambda(dx)$, $k \ge 0$, for some finite positive measure $\lambda$ on $I$ if and only if
1. for $I = (0,1)$:
   (a) $(\Delta^n a)_k \ge 0$, $n \ge 0$, $k \ge 0$;
   (b) $\lim_{n\to\infty} a_n = 0$;
   (c) $\lim_{n\to\infty} (\Delta^n a)_0 = 0$;
2. for $I = (0,1]$: conditions 1(a) and 1(c) above hold;
3. for $I = [0,1)$: conditions 1(a) and 1(b) hold;
4. for $I = [0,1]$: condition 1(a) holds.

Proof. Condition 1(a) comes from the solution of the moment problem on the closed unit interval given by Hausdorff (1921). The Dominated Convergence Theorem applied to the integrals $\int_{[0,1]} x^n\, \lambda(dx)$ and $\int_{[0,1]} (1-x)^n\, \lambda(dx)$ is used to express the facts that $\lambda(\{1\}) = 0$ and $\lambda(\{0\}) = 0$, respectively, and results, respectively, in conditions 1(b) and 1(c).
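For a concrete illustration of condition 1(a) (my example, not the paper's): Lebesgue measure on $[0,1]$ has moments $a_k = 1/(k+1)$, and $(\Delta^n a)_k = \int_0^1 x^k (1-x)^n\, dx$ is a beta integral, hence positive. A sketch in exact rational arithmetic:

```python
import math
from fractions import Fraction

def delta_n(seq, n):
    # n-fold difference operator, (Delta a)_k = a_k - a_{k+1}
    for _ in range(n):
        seq = [seq[k] - seq[k + 1] for k in range(len(seq) - 1)]
    return seq

a = [Fraction(1, k + 1) for k in range(30)]  # moments of Lebesgue measure on [0,1]

for n in range(10):
    for k in range(10):
        # (Delta^n a)_k equals the beta integral B(k+1, n+1) = k! n! / (k+n+1)!
        assert delta_n(a, n)[k] == Fraction(
            math.factorial(k) * math.factorial(n), math.factorial(k + n + 1))
print("condition 1(a) holds on the window, with exact beta values")
```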

3. Characterization of expectation sequences of extreme order statistics

We are now in a position to give characterizations of expectation sequences of maximal and minimal order statistics.

Theorem 1. A sequence $(\mu_n,\ n \ge 0)$ is the expectation sequence of maximal order statistics if and only if the following conditions hold:
(i) $(\Delta^k \mu)_n \le 0$, $n, k = 1, 2, 3, \dots$;
(ii) $\mu_n = o(n)$;
(iii) $(\Delta^n \mu)_0 = o(n)$, $\mu_0 := 0$.

Remark. $(\mu_n,\ n \ge 0)$ is the expectation sequence of maximal order statistics for some random variable $X$ if and only if $(-(\Delta^n \mu)_0,\ n \ge 0)$ is the expectation sequence of minimal order statistics for $X$.
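As an illustration of Theorem 1 (my example): for $X$ uniform on $(0,1)$ one has $M_k(X) = k/(k+1)$, and all three conditions can be checked, the first and third on a finite window; note that $-(\Delta^n \mu)_0$ comes out as $1/(n+1) = m_n(X)$, in line with the Remark:

```python
from fractions import Fraction

def delta_n(seq, n):
    # n-fold difference operator, (Delta mu)_k = mu_k - mu_{k+1}
    for _ in range(n):
        seq = [seq[k] - seq[k + 1] for k in range(len(seq) - 1)]
    return seq

# mu_k = E max{X_1,...,X_k} = k/(k+1) for X ~ Uniform(0,1), mu_0 := 0
mu = [Fraction(k, k + 1) for k in range(40)]

# (i): (Delta^k mu)_n <= 0 for n, k >= 1
for k in range(1, 8):
    assert all(d <= 0 for d in delta_n(mu, k)[1:])

# (ii) is clear: mu_n < 1, so certainly mu_n = o(n).
# (iii): (Delta^n mu)_0 = -1/(n+1) = o(n); its negation is E min of n uniforms
for n in range(1, 20):
    assert delta_n(mu, n)[0] == Fraction(-1, n + 1)
print("conditions (i)-(iii) verified on the window")
```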


Proof. Let $\beta_i := \mu_{i+2} - \mu_{i+1}$, $i = 0, 1, \dots$. By Kadane (1974, Theorem 1) the sequence $\mu$ is the expectation sequence of maximal order statistics if and only if
$$\beta_i = \int_{(0,1)} x^i\, \nu(dx), \quad i \ge 0,$$
for some finite positive measure $\nu$ on $(0,1)$. We can think of $\nu$ as a finite positive measure on $[0,1]$ such that $\nu(\{0\}) = \nu(\{1\}) = 0$. Since $(\Delta^n \beta)_k = -(\Delta^{n+1} \mu)_{k+1}$, $k = 0, 1, \dots$, condition (i) means that $(\beta_i,\ i \ge 0)$ is a moment sequence for some positive measure $\nu$ on $[0,1]$. It remains to express the conditions $\nu(\{0\}) = \nu(\{1\}) = 0$ in terms of $\mu$.

Suppose $\nu(\{1\}) = 0$. By Lemma 1, $\lim_{i\to\infty} \beta_i = \lim_{i\to\infty} (\mu_{i+2} - \mu_{i+1}) = 0$, hence $\lim_{i\to\infty} \mu_i / i = \lim_{i\to\infty} \sum_{j=0}^{i-1} \beta_j / i = 0$. Suppose now that $\nu(\{1\}) = \varepsilon > 0$. Then
$$\frac{\mu_i}{i} = \frac{\mu_1 + \beta_0 + \cdots + \beta_{i-2}}{i} = \frac{\mu_1}{i} + \frac{1}{i} \int_{[0,1]} (1 + x + \cdots + x^{i-2})\, \nu(dx) \ge \frac{\mu_1}{i} + \frac{i-1}{i}\, \varepsilon \to \varepsilon > 0 \quad \text{as } i \to \infty.$$
Thus $\nu(\{1\}) = 0$ if and only if $\mu_n = o(n)$. To show similarly that $\nu(\{0\}) = 0$ if and only if $(\Delta^n \mu)_0 = o(n)$ we use the sequence-to-sequence transformation $a \mapsto \tilde{a}$ defined by (1). Then conditions (i) and (iii) can be written as $(\Delta^k \tilde{\mu})_n \le 0$, $n, k = 1, 2, 3, \dots$, and $\tilde{\mu}_n = o(n)$. Repeating the above reasoning we get $\tilde{\nu}(\{1\}) = \nu(\{0\}) = 0$, where $\tilde{\nu} = \nu \circ \sigma$.

In the case of nonnegative random variables, condition (iii) in Theorem 1 can be omitted after strengthening condition (i):

Theorem 2. A sequence $(\mu_n,\ n \ge 0)$, $\mu_0 := 0$, is the expectation sequence of maximal order statistics of a nonnegative random variable if and only if condition (ii) of Theorem 1 holds and
(i′) $(\Delta^k \mu)_n \le 0$, $k = 1, 2, 3, \dots$, $n = 0, 1, 2, \dots$.

Proof. The proof is based on Kadane (1971, Theorem 2), which gives a characterization of expectation sequences of minimal order statistics of a nonnegative random variable. Namely, a sequence $(a_n,\ n \ge 1)$ is the expectation sequence of minimal order statistics if and only if
$$a_n = \int_{(0,1]} x^{n-1}\, \lambda(dx)$$
for some finite positive measure $\lambda$ on $(0,1]$. To convert this statement into a characterization of expectation sequences of maximal order statistics we will use formula 3.4.3 of David (1981), which implies that a sequence $(b_n,\ n \ge 0)$ is the expectation sequence of maximal order statistics if and only if the sequence $(-\tilde{b}_n,\ n \ge 1)$ is an expectation sequence of minimal order statistics.

Suppose that $(\mu_n,\ n \ge 0)$, $\mu_0 := 0$, fulfills conditions (i′) and (ii). Then by (2),
$$(\Delta^n \tilde{\mu})_k \le 0, \quad n = 0, 1, \dots,\ k = 1, 2, \dots,$$
and
$$-\tilde{\mu}_k = \int_{[0,1]} x^{k-1}\, \lambda(dx)$$
for some positive measure $\lambda$ on $[0,1]$. Since by condition (ii) and (2)
$$-\int_{[0,1]} (1-x)^n\, \lambda(dx) = (\Delta^n \tilde{\mu})_1 = (\Delta \mu)_n \to 0 \quad \text{as } n \to \infty,$$
we have $\lambda(\{0\}) = 0$. Thus, conditions (i′) and (ii) imply that $(-\tilde{\mu}_k,\ k \ge 1)$ is the expectation sequence of minimal order statistics and consequently that $(\mu_n,\ n \ge 0)$ is the expectation sequence of maximal order statistics.

To show that conditions (i′) and (ii) are necessary, suppose that $(\mu_k,\ k \ge 0)$ is the expectation sequence of maximal order statistics. Then $(-\tilde{\mu}_k,\ k \ge 1)$ is the expectation sequence of minimal order statistics and by Kadane (1971, Theorem 2) we get
$$-\tilde{\mu}_k = \int_{(0,1]} x^{k-1}\, \lambda(dx)$$
for some positive measure $\lambda$ on $(0,1]$. Thus we have $(\Delta^n \tilde{\mu})_k \le 0$, $n = 0, 1, \dots$, $k = 1, 2, \dots$, and
$$(\Delta^n \tilde{\mu})_0 = \int_{(0,1]} \frac{1 - (1-x)^n}{x}\, \lambda(dx) = o(n),$$
which by (2) implies conditions (i′) and (ii).
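A numerical illustration of Theorem 2 (my example, not from the paper): for $X$ exponential with mean 1, a classical computation gives $M_k(X) = 1 + 1/2 + \cdots + 1/k$, and conditions (i′) and (ii) can be checked on a finite window:

```python
from fractions import Fraction

def delta_n(seq, n):
    # n-fold difference operator, (Delta mu)_k = mu_k - mu_{k+1}
    for _ in range(n):
        seq = [seq[k] - seq[k + 1] for k in range(len(seq) - 1)]
    return seq

# mu_k = E max of k i.i.d. mean-1 exponentials = harmonic number H_k, mu_0 := 0
mu = [Fraction(0)]
for k in range(1, 40):
    mu.append(mu[-1] + Fraction(1, k))

# (i'): (Delta^k mu)_n <= 0 now for all n >= 0 (nonnegative X), k >= 1
for k in range(1, 8):
    assert all(d <= 0 for d in delta_n(mu, k))

# (ii): mu_n = H_n grows like log n, hence mu_n = o(n)
for n in (1, 10, 30):
    print(n, float(mu[n]) / n)
```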

4. Application to convergence of random variables

Hill and Spruill (1994) give a method for investigating convergence in distribution based on convergence of maximal moments. Theorems 1 and 2 of the previous section allow us to simplify the proofs of the main results of Hill and Spruill (1994).

Corollary 1. Suppose $\{X_n,\ n \ge 1\}$ are integrable random variables such that $\lim_{n\to\infty} M_k(X_n) = \gamma_k$ exists and is finite for all $k \ge 0$. Then there exists a random variable $X_\infty$ with $M_k(X_\infty) = \gamma_k$ and $X_n \to X_\infty$ in distribution if and only if $\gamma_k = o(k)$ and $(\Delta^k \gamma)_0 = o(k)$.

Proof. Suppose $\lim_{n\to\infty} M_k(X_n) = \gamma_k = o(k)$ and $(\Delta^k \gamma)_0 = o(k)$, that is, conditions (ii) and (iii) of Theorem 1 hold for the sequence $(\gamma_k,\ k \ge 0)$. By Theorem 1, for every $n \ge 1$ the sequence $(M_k(X_n),\ k \ge 0)$ fulfills condition (i); hence this condition also holds for the sequence $(\gamma_k,\ k \ge 0)$. Thus Theorem 1 implies that $(\gamma_k,\ k \ge 0)$ is the expectation sequence of maximal order statistics for some random variable $X_\infty$. Denote $F_n(x) = P(X_n \le x)$, $F_n^{\leftarrow}(x) = \sup\{t : F_n(t) \le x\}$ the generalized upper inverse of $F_n$, and $G_n(x) = \int_0^x F_n^{\leftarrow}(s)\, ds$ for $1 \le n \le \infty$. Then for every $1 \le n \le \infty$, $G_n$ is a convex function, absolutely continuous on $[0,1]$. Because
$$M_k(X_n) = k \int_0^1 x^{k-1} F_n^{\leftarrow}(x)\, dx = k \int_0^1 x^{k-1}\, G_n(dx), \quad 1 \le n \le \infty,\ k \ge 1,$$
we have
$$\lim_{n\to\infty} \int_0^1 p(x)\, G_n(dx) = \int_0^1 p(x)\, G_\infty(dx)$$
for every polynomial $p$. This implies that the sequence $(G_n,\ n \ge 1)$ converges weakly to $G_\infty$ as a sequence of functions of bounded variation. Hence $\lim_{n\to\infty} G_n(x) = G_\infty(x)$ for all $x \in [0,1]$. Thus (see, for example, Roberts and Varberg, 1973) $F_n^{\leftarrow}(x) = G_n'(x) \to G_\infty'(x) = F_\infty^{\leftarrow}(x)$ as $n \to \infty$, except possibly at countably many points of $[0,1]$. Consequently $X_n \to X_\infty$ in distribution as $n \to \infty$.

The same argument as before gives:

Corollary 2. Suppose $\{X_n,\ n \ge 1\}$ are integrable nonnegative random variables such that $\lim_{n\to\infty} M_k(X_n) = \gamma_k$ exists and is finite for each $k \ge 0$. Then there exists a random variable $X_\infty$ with $M_k(X_\infty) = \gamma_k$ and $X_n \to X_\infty$ in distribution if and only if $\gamma_k = o(k)$.
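Finally, a small Monte Carlo sketch (my construction, not from the paper) of the quantile-integral representation $M_k(X) = k \int_0^1 x^{k-1} F^{\leftarrow}(x)\, dx$ on which the proofs of this section rest, for $X$ exponential with mean 1; both estimates should be close to $H_5 \approx 2.2833$:

```python
import math
import random

random.seed(0)
k, trials = 5, 200_000

# direct Monte Carlo estimate of E max{X_1,...,X_k} for X ~ Exp(1)
mc = sum(max(random.expovariate(1.0) for _ in range(k))
         for _ in range(trials)) / trials

# quantile-integral form with F^{<-}(x) = -log(1-x); substituting u = x^k turns
# k * int_0^1 x^{k-1} F^{<-}(x) dx into E[F^{<-}(U^{1/k})] for U ~ Uniform(0,1)
qi = sum(-math.log(1.0 - random.random() ** (1.0 / k))
         for _ in range(trials)) / trials

print(mc, qi, sum(1.0 / j for j in range(1, k + 1)))
```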

Acknowledgements

I wish to thank Professor Jan Rosinski for his help and encouragement while this research was being done and the paper written.


References

Balakrishnan, N., 1990. Improving the Hartley–David–Gumbel bound for the mean of extreme order statistics. Statist. Probab. Lett. 9 (4), 291–294.
David, H.A., 1981. Order Statistics, 2nd Edition. Wiley, New York.
Hausdorff, F., 1921. Summationsmethoden und Momentfolgen. I. Math. Z. 9, 74–109.
Hill, T., Spruill, M., 1994. On the relationship between convergence in distribution and convergence of expected extremes. Proc. Amer. Math. Soc. 121, 1235–1243 (see also Erratum, to appear).
Huang, J.S., 1998. Sequence of expectations of maximum-order statistics. Statist. Probab. Lett. 38 (2), 117–123.
Kadane, J.B., 1971. A moment problem for order statistics. Ann. Math. Statist. 42, 745–751.
Kadane, J.B., 1974. A characterization of triangular arrays which are expectations of order statistics. J. Appl. Probab. 11, 413–416.
Mallows, C.L., 1973. Bounds on distribution functions in terms of expectations of order-statistics. Ann. Probab. 1, 297–303.
Roberts, A.W., Varberg, D.E., 1973. Convex Functions. Academic Press, New York.