Journal of Statistical Planning and Inference 138 (2008) 2214 – 2222 www.elsevier.com/locate/jspi
Dependence properties of order statistics☆
Taizhong Hu∗, Huaihou Chen
Department of Statistics and Finance, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
Received 28 May 2006; received in revised form 1 April 2007; accepted 28 September 2007
Available online 13 November 2007
Abstract

Let X_{1:n} ≤ X_{2:n} ≤ ··· ≤ X_{n:n} denote the order statistics of independent random variables X_1, X_2, ..., X_n with possibly nonidentical distributions for each n. For fixed 1 ≤ j_1 < j_2 < ··· < j_r ≤ n and (x_1, ..., x_r) ∈ R^r, it is shown that if j_1 − i ≥ max{0, n − m} and A_{i,m,y} = {X_{i:m} > y}, then the conditional probability P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | A_{i,m,y}] is increasing in y, and that if A_{i,m,y} is either {X_{i:m} > y} or {X_{i:m} ≤ y}, then this conditional probability is decreasing in i for fixed y < x_1 < ··· < x_r. It is also shown that if each X_k has a continuous distribution function, and if A_{i,m,y} is either {X_{i:m} = y} or {X_{i−1:m} < y < X_{i:m}}, then the conditional probability is decreasing in i for fixed y < x_1 < ··· < x_r, where X_{m+1:m} = +∞. In particular, we obtain that RTI(X_{j:n} | X_{i:m}) for j − i ≥ max{n − m, 0} and LTD(X_{j:n} | X_{i:m}) for j − i ≤ min{n − m, 0}. We thus extend the main results in Boland et al. [1996. Bivariate dependence properties of order statistics. J. Multivariate Anal. 56, 75–89] and in Hu and Xie [2006. Negative dependence in the balls and bins experiment with applications to order statistics. J. Multivariate Anal. 97, 1342–1354].
© 2007 Elsevier B.V. All rights reserved.

MSC: 60E15; 62N05

Keywords: Balls and bins experiment; Counting process; Left tail decreasing; Order statistics; Right tail increasing
1. Introduction

In the literature, there are many papers dealing with dependence properties of order statistics under different distributional scenarios. The study was initiated by Tukey (1958) and Barlow and Proschan (1975), developed by Karlin and Rinott (1980), Block et al. (1987), Shanthikumar (1987), Kim and David (1990), and Boland et al. (1996), and has been continued in the more recent papers by Khaledi and Kochar (2000), Hu and Zhu (2003), Hu and Yang (2004), Avérous et al. (2005), Khaledi and Kochar (2005), and Hu and Xie (2006).

Let X_{1:n} ≤ X_{2:n} ≤ ··· ≤ X_{n:n} denote the order statistics of independent random variables X_1, X_2, ..., X_n with possibly nonidentical distributions. Recall that a random variable U is said to be right tail increasing in another random variable V, denoted by RTI(U | V), if P(U > u | V > v) is increasing in v for each u, and that U is said to be left tail decreasing in V, denoted by LTD(U | V), if P(U ≤ u | V ≤ v) is decreasing in v for each u. For relationships between RTI, LTD and other positive dependence notions, see Barlow and Proschan (1975). Boland et al. (1996) proved that

RTI(X_{j:n} | X_{i:n}) and LTD(X_{i:n} | X_{j:n}) for 1 ≤ i < j ≤ n,   (1.1)
☆ Supported by National Natural Science Foundation of China (No. 10771204), and by the National Basic Research Program of China (973 Program, Grant no. 2007CB814901).
∗ Corresponding author. E-mail address: [email protected] (T. Hu).
0378-3758/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.jspi.2007.09.013
and that

P(X_{j:n} > t | X_{i:n} > s) is decreasing in i, i < j,   (1.2)
for any fixed s < t and fixed integer j. By exploiting the negative dependence of occupancy numbers in the balls and bins experiment, Hu and Xie (2006) obtained the following multivariate versions of (1.1) and (1.2): For 1 ≤ i < j_1 < j_2 < ··· < j_r ≤ n and fixed (x_1, ..., x_r) ∈ R^r,

(a) if A_{i,n,y} = {X_{i:n} > y}, then

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | A_{i,n,y}]   (1.3)

is increasing in y;
(b) if the event A_{i,n,y} is either {X_{i:n} > y} or {X_{i:n} ≤ y}, then the conditional probability in (1.3) is decreasing in i for each y; and
(c) if each X_k has a continuous distribution function, and if A_{i,n,y} is either {X_{i:n} = y} or {X_{i−1:n} < y < X_{i:n}}, then (1.3) is decreasing in i for each y, where X_{0:n} = −∞.

The purpose of this paper is to generalize the main results, described above, in Boland et al. (1996) and in Hu and Xie (2006), shedding additional light on the dependence structures of order statistics. More precisely, for any positive integers m and n, let X_1, X_2, ..., X_{m∨n} be independent random variables with possibly nonidentical distributions. Let X_{1:n} ≤ X_{2:n} ≤ ··· ≤ X_{n:n} be the order statistics of X_1, X_2, ..., X_n, and let X_{1:m} ≤ X_{2:m} ≤ ··· ≤ X_{m:m} be the order statistics of X_1, X_2, ..., X_m. Fix 1 ≤ j_1 < j_2 < ··· < j_r ≤ n and (x_1, ..., x_r) ∈ R^r. We prove that

(a′) if j_1 − i ≥ max{0, n − m} and A_{i,m,y} = {X_{i:m} > y}, then

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | A_{i,m,y}]   (1.4)

is increasing in y;
(b′) if A_{i,m,y} is either {X_{i:m} > y} or {X_{i:m} ≤ y}, then (1.4) is decreasing in i for each y, y < min{x_1, ..., x_r}; and
(c′) if each X_k has a continuous distribution function, and if A_{i,m,y} is either {X_{i:m} = y} or {X_{i−1:m} < y < X_{i:m}}, then (1.4) is decreasing in i for each y, y < min{x_1, ..., x_r}, where X_{m+1:m} = +∞.

In particular, we obtain that RTI(X_{j:n} | X_{i:m}) for j − i ≥ max{n − m, 0} and LTD(X_{j:n} | X_{i:m}) for j − i ≤ min{n − m, 0}.

The main results of this paper are presented in Section 3. To prove them, some auxiliary results concerning the negative dependence in the balls and bins experiment are established in Section 2. Throughout the paper, the terms 'increasing' and 'decreasing' mean 'non-decreasing' and 'non-increasing', respectively. When an expectation or a probability is conditioned on an event such as V = v, we assume that v is in the support of V. For any set J, denote its cardinality by |J|.

2. Auxiliary results

Consider the experiment with m balls {1, 2, ..., m} and n bins {1, 2, ..., n}. Suppose that balls are thrown into bins independently. The probability that ball k goes into bin i is p_{i,k}, subject only to the natural restriction that ∑_{i=1}^{n} p_{i,k} = 1 for each ball k. For i = 1, ..., n and j = 1, ..., m, let B_{i,j} be the indicator random variable with B_{i,j} = 1 if ball j goes into bin i, and B_{i,j} = 0 otherwise.
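This experiment is straightforward to simulate. The following minimal Python sketch (the probability vectors p_{i,k} below are illustrative choices, not taken from the paper) estimates the covariance of the occupancy counts of two bins, illustrating the intuition behind the negative dependence exploited in this section: if one bin receives many balls, the others tend to receive few.

```python
import random

def throw_balls(p, rng):
    """One run of the experiment: p[k][i] is the probability that ball k
    goes into bin i. Returns the occupancy count of each bin."""
    n_bins = len(p[0])
    counts = [0] * n_bins
    for probs in p:
        counts[rng.choices(range(n_bins), weights=probs)[0]] += 1
    return counts

rng = random.Random(0)
# 4 balls, 3 bins, nonidentical (illustrative) probability vectors p_{i,k}
p = [[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [0.3, 0.3, 0.4], [0.1, 0.4, 0.5]]
runs = [throw_balls(p, rng) for _ in range(20000)]
m1 = sum(c[0] for c in runs) / len(runs)
m2 = sum(c[1] for c in runs) / len(runs)
cov = sum(c[0] * c[1] for c in runs) / len(runs) - m1 * m2
print(cov < 0)  # occupancy counts of distinct bins are negatively correlated
```

Here the true covariance of the two occupancy counts is −∑_k p_{1,k} p_{2,k} = −0.40, so the sample estimate is reliably negative.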
Throughout this section, denote by A the set of all subsets of {1, ..., n} × {1, ..., m}. For each k = 1, ..., m, α ∈ A and L ⊂ A, define

B_α = ∑_{(i,j)∈α} B_{i,j},   B_L = (B_α, α ∈ L),

and

B_α^{(k)} = ∑_{(i,j)∈α, j≠k} B_{i,j},   B_L^{(k)} = (B_α^{(k)}, α ∈ L).
To prove the main result of this section, we shall need the following lemma.

Lemma 2.1 (Dubhashi and Ranjan, 1998). Let β ∈ A, and let K and L be two proper subsets of A such that

(⋃_{α∈L∪K} α) ∩ β = ∅,   (2.1)

where K may be an empty set. Then

P[B_L ∈ Λ_1 | B_K ∈ Λ_2, B_β = b + 1] ≤ max_{1≤k≤m} P[B_L^{(k)} ∈ Λ_1 | B_K^{(k)} ∈ Λ_2, B_β^{(k)} = b]

holds for all Λ_1 ⊂ {1, ..., m}^{|L|}, Λ_2 ⊂ {1, ..., m}^{|K|} and any integer b ≥ 0.

Theorem 2.1. Let I, J, K and L be four families of disjoint subsets of {1, ..., n} × {1, ..., m} such that

⋃_{α∈L} α, ⋃_{α∈I} α, ⋃_{α∈J} α, ⋃_{α∈K} α are disjoint,   (2.2)

where one or two of I, J and K may be an empty set. Then for any increasing function φ: R^{|L|} → R and all nonnegative integer vectors b_I, b_J and b_K,

E[φ(B_L) | B_I ≥ b_I, B_J ≤ b_J, B_K = b_K]

is decreasing in b_I, b_J and b_K.

Theorem 2.1 states that (B_L, B_I, B_J, B_K) satisfies a certain notion of negative dependence, which reflects the intuition that B_L will tend to be "large" when (B_I, B_J, B_K) is "small", and vice versa. Theorem 2.2 in Hu and Xie (2006) is the special case of Theorem 2.1 with

I = {{i} × {1, ..., m}: i ∈ I},  J = {{j} × {1, ..., m}: j ∈ J},  K = {{k} × {1, ..., m}: k ∈ K},  L = {{l} × {1, ..., m}: l ∈ L},

where I, J, K and L are four disjoint subsets of {1, 2, ..., n}, and one or two of I, J and K may be empty.

Proof of Theorem 2.1. The proof is similar to that of Theorem 2.2 in Hu and Xie (2006). Let I, J, K and L be four families of disjoint subsets of {1, ..., n} × {1, ..., m} satisfying Condition (2.2), and let φ: R^{|L|} → R be an increasing function. It suffices to prove that, for any fixed a ∈ R, the conditional probability

ψ(b_I, b_J, b_K) ≡ P[φ(B_L) > a | B_I ≥ b_I, B_J ≤ b_J, B_K = b_K] is decreasing in (b_I, b_J, b_K).   (2.3)
To prove (2.3), we will need the following inequality:

P[φ(B_L) > a | B_I ≥ b_I, B_J ≤ b_J, B_K = b_K] ≥ P[φ(B_L^{(m)}) > a | B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]   (2.4)
for all a ∈ R, m ≥ 2 and any nonnegative integer vectors b_I, b_J and b_K. Fix the number n of bins. We now prove (2.3) and (2.4) synchronously by induction on m, the number of balls. For m = 1 or 2, (2.3) and (2.4) are trivial. Assume that (2.3) and (2.4) hold for the experiment with m − 1 balls {1, 2, ..., m − 1} and n bins {1, 2, ..., n}. Then

P[φ(B_L^{(m)}) > a | B_β^{(m)} ≥ b_β − 1, B_{I\{β}}^{(m)} ≥ b_{I\{β}}, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]
 ≥ P[φ(B_L^{(m)}) > a | B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]   (2.5)
for a ∈ R and any β ∈ I. We now turn to prove (2.3) and (2.4) for the balls and bins experiment with m balls {1, 2, ..., m}.

Step 1: First we prove that (2.4) holds for the balls and bins experiment with m balls. Without loss of generality, assume that I, J, K and L are not empty sets. Then

P[φ(B_L) > a | B_I ≥ b_I, B_J ≤ b_J, B_K = b_K]
 = P[φ(B_L) > a, B_I ≥ b_I, B_J ≤ b_J, B_K = b_K] / P[B_I ≥ b_I, B_J ≤ b_J, B_K = b_K]
 = {∑_{i=1}^{n} P[φ(B_L) > a, B_I ≥ b_I, B_J ≤ b_J, B_K = b_K | B_{i,m} = 1] P(B_{i,m} = 1)} / {∑_{i=1}^{n} P[B_I ≥ b_I, B_J ≤ b_J, B_K = b_K | B_{i,m} = 1] P(B_{i,m} = 1)}
 ≥ min_{1≤i≤n} P[φ(B_L) > a, B_I ≥ b_I, B_J ≤ b_J, B_K = b_K | B_{i,m} = 1] / P[B_I ≥ b_I, B_J ≤ b_J, B_K = b_K | B_{i,m} = 1].   (2.6)
Since the pair (i, m) may belong to a set in any one of the families I, J, K, L and A\(I ∪ J ∪ K ∪ L), five cases arise.

• If the minimum in (2.6) occurs for some (l_0, m) ∈ γ ∈ L, then

P[φ(B_L) > a | B_I ≥ b_I, B_J ≤ b_J, B_K = b_K]
 ≥ P[φ(B_γ^{(m)} + 1, B_{L\{γ}}^{(m)}) > a, B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K] / P[B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]
 = P[φ(B_γ^{(m)} + 1, B_{L\{γ}}^{(m)}) > a | B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]
 ≥ P[φ(B_L^{(m)}) > a | B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K],

where the two inequalities follow from Condition (2.2) and the monotonicity of φ, respectively, and φ(B_γ^{(m)} + 1, B_{L\{γ}}^{(m)}) denotes the value of the function φ at the point B_L^{(m)} with B_γ^{(m)} replaced by B_γ^{(m)} + 1.

• If the minimum in (2.6) occurs for some (i_0, m) ∈ β ∈ I, then

P[φ(B_L) > a | B_I ≥ b_I, B_J ≤ b_J, B_K = b_K]
 ≥ P[φ(B_L^{(m)}) > a, B_β^{(m)} ≥ b_β − 1, B_{I\{β}}^{(m)} ≥ b_{I\{β}}, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K] / P[B_β^{(m)} ≥ b_β − 1, B_{I\{β}}^{(m)} ≥ b_{I\{β}}, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]
 = P[φ(B_L^{(m)}) > a | B_β^{(m)} ≥ b_β − 1, B_{I\{β}}^{(m)} ≥ b_{I\{β}}, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]
 ≥ P[φ(B_L^{(m)}) > a | B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K],

where the two inequalities follow from (2.2) and (2.5), respectively.

• If the minimum in (2.6) occurs for some (j_0, m) ∈ γ ∈ J or (k_0, m) ∈ γ ∈ K, then the proof is similar to that of the above case.

• If the minimum in (2.6) occurs for some (i_0, m) ∈ γ ∈ A\(I ∪ J ∪ K ∪ L), then the right hand side of (2.6) is

P[φ(B_L^{(m)}) > a, B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K] / P[B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K]
 = P[φ(B_L^{(m)}) > a | B_I^{(m)} ≥ b_I, B_J^{(m)} ≤ b_J, B_K^{(m)} = b_K].
Therefore, (2.4) holds for the balls and bins experiment with m balls.

Step 2: Next we prove that ψ(b_I, b_J, b_K) is decreasing in (b_I, b_J, b_K) for the experiment with m balls. The rest of the proof is the same as that of Theorem 2.2 in Hu and Xie (2006) and is omitted.

Combining these two steps, we establish (2.3) and (2.4) by induction and thus complete the proof of the theorem. □

Setting any two of I, J and K to be an empty set, we obtain the following consequences of Theorem 2.1.

Corollary 2.1. Let I and L be two families of disjoint subsets of {1, ..., n} × {1, ..., m} such that α ∩ β = ∅ for all α ∈ L and β ∈ I. Then E[φ(B_L) | B_I ≥ b_I] is decreasing in b_I for any increasing function φ: R^{|L|} → R.

Corollary 2.2. Let J and L be two families of disjoint subsets of {1, ..., n} × {1, ..., m} such that α ∩ β = ∅ for all α ∈ L and β ∈ J. Then E[φ(B_L) | B_J ≤ b_J] is decreasing in b_J for any increasing function φ: R^{|L|} → R.

Corollary 2.3. Let K and L be two families of disjoint subsets of {1, ..., n} × {1, ..., m} such that α ∩ β = ∅ for all α ∈ L and β ∈ K. Then E[φ(B_L) | B_K = b_K] is decreasing in b_K for any increasing function φ: R^{|L|} → R.

3. Main results

Throughout this section, for any positive integers m and n, let X_1, X_2, ..., X_{m∨n} be independent random variables with possibly nonidentical distributions. Let X_{1:n} ≤ X_{2:n} ≤ ··· ≤ X_{n:n} be the order statistics of X_1, X_2, ..., X_n, and let X_{1:m} ≤ X_{2:m} ≤ ··· ≤ X_{m:m} be the order statistics of X_1, X_2, ..., X_m. In Theorems 3.1–3.3 below, there is no restriction on the integers m and n.

Theorem 3.1. For any positive integers m and n, let X_1, X_2, ..., X_{m∨n} be independent random variables with possibly nonidentical distributions. Fix 1 ≤ j_1 < j_2 < ··· < j_r ≤ n.

(1) If y < x_1 < ··· < x_r, then

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | X_{i:m} > y]   (3.1)

and

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | X_{i:m} ≤ y]   (3.2)

are both decreasing in i ∈ {1, 2, ..., m}.

(2) If x_1 < ··· < x_r < y, then

P[X_{j_1:n} < x_1, X_{j_2:n} < x_2, ..., X_{j_r:n} < x_r | X_{i:m} < y]

and

P[X_{j_1:n} < x_1, X_{j_2:n} < x_2, ..., X_{j_r:n} < x_r | X_{i:m} ≥ y]

are both increasing in i ∈ {1, 2, ..., m}.
Proof. We give the proof of the first result only; the second result is the dual case of the first one. Let N(u) be the number of the observations X_1, X_2, ..., X_n less than or equal to u, and let N_1(u) be the number of the observations X_1, X_2, ..., X_m less than or equal to u. Then (3.1) is equivalent to

P[N(x_1) < j_1, N(x_2) < j_2, ..., N(x_r) < j_r | N_1(y) < i].   (3.3)

For y < x_1 < ··· < x_r, consider the experiment with m ∨ n balls and r + 2 bins and with probabilities {p_{i,k}}, k = 1, ..., m ∨ n, given by

p_{1,k} = P[X_k ≤ y],  p_{2,k} = P[y < X_k ≤ x_1],  p_{r+2,k} = P[X_k > x_r],

and

p_{l,k} = P[x_{l−2} < X_k ≤ x_{l−1}]  for l = 3, ..., r + 1.

Set I_1 = (−∞, y], I_2 = (y, x_1], I_{r+2} = (x_r, ∞) and I_l = (x_{l−2}, x_{l−1}] for l = 3, ..., r + 1. Since X_k ∈ I_i for any pair (i, k) means that ball k goes into bin i, (3.3) can be rewritten as

P[∑_{l=1}^{r} B_{Δ_l} > n − j_1, ∑_{l=2}^{r} B_{Δ_l} > n − j_2, ..., B_{Δ_r} > n − j_r | B_Γ < i],   (3.4)

where

Γ = {1} × {1, 2, ..., m}  and  Δ_i = {i + 2} × {1, 2, ..., n}  for i = 1, ..., r.

By applying Corollary 2.2 with L = {Δ_1, ..., Δ_r} and J = {Γ}, the conditional probability in (3.4) is decreasing in i. Similarly, (3.2) is equivalent to

P[N(x_1) < j_1, N(x_2) < j_2, ..., N(x_r) < j_r | N_1(y) ≥ i]
 = P[∑_{l=1}^{r} B_{Δ_l} > n − j_1, ∑_{l=2}^{r} B_{Δ_l} > n − j_2, ..., B_{Δ_r} > n − j_r | B_Γ ≥ i],

which is decreasing in i by applying Corollary 2.1 with L = {Δ_1, ..., Δ_r} and I = {Γ}. This completes the proof. □
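Because the proof reduces (3.1) to a statement about the counting processes N and N_1, the monotonicity in i can be checked numerically by exact enumeration of which interval each X_k falls into. The following Python sketch does this for an illustrative choice of exponential distributions; the rates, thresholds y, x and the indices m, n, j are assumptions made for the example, not values from the paper.

```python
from itertools import product
import math

# Illustrative setup: X_k ~ Exponential(rates[k-1]), nonidentical.
# Intervals: bin 0 = (-inf, y], bin 1 = (y, x], bin 2 = (x, inf).
rates = [0.5, 1.0, 1.5, 2.0, 2.5]   # X_1, ..., X_5
y, x = 0.4, 1.2
m, n, j = 4, 5, 3                   # condition on X_{i:4}; event {X_{3:5} > x}

def bin_probs(r):
    F = lambda t: 1.0 - math.exp(-r * t)
    return [F(y), F(x) - F(y), 1.0 - F(x)]

P = [bin_probs(r) for r in rates]

def cond_prob(i):
    """Exact P[X_{j:n} > x | X_{i:m} > y] by enumerating all 3^n configurations."""
    num = den = 0.0
    for cfg in product(range(3), repeat=n):
        w = math.prod(P[k][cfg[k]] for k in range(n))
        n1y = sum(cfg[k] == 0 for k in range(m))   # N_1(y)
        nx = sum(cfg[k] <= 1 for k in range(n))    # N(x)
        if n1y < i:                                # event {X_{i:m} > y}
            den += w
            if nx < j:                             # event {X_{j:n} > x}
                num += w
    return num / den

vals = [cond_prob(i) for i in range(1, m + 1)]
print(all(a >= b - 1e-12 for a, b in zip(vals, vals[1:])))  # True: decreasing in i
```

The identities {X_{i:m} > y} = {N_1(y) < i} and {X_{j:n} > x} = {N(x) < j} used in the code are exactly the reduction employed in the proof above.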
In Theorem 3.1, if we assume that each X_i has a continuous distribution, we obtain the next interesting result.

Theorem 3.2. For any positive integers m and n, let X_1, X_2, ..., X_{m∨n} be independent random variables, each with a continuous distribution function. Fix 1 ≤ j_1 < j_2 < ··· < j_r ≤ n, and denote X_{0:m} = −∞ and X_{m+1:m} = +∞.

(1) If y < x_1 < ··· < x_r, then

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | X_{i−1:m} < y < X_{i:m}]

and

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | X_{i:m} = y]

are both decreasing in i ∈ {1, 2, ..., m}.

(2) If x_1 < ··· < x_r < y, then

P[X_{j_1:n} < x_1, X_{j_2:n} < x_2, ..., X_{j_r:n} < x_r | X_{i−1:m} < y < X_{i:m}]

and

P[X_{j_1:n} < x_1, X_{j_2:n} < x_2, ..., X_{j_r:n} < x_r | X_{i:m} = y]

are both increasing in i ∈ {1, 2, ..., m}.

Proof. A similar argument to that of the proof of Theorem 3.3 in Hu and Xie (2006) establishes the desired result by using Theorem 2.1. □
To prove the next theorem, we give one lemma. Consider the experiment with m ∨ n balls and n′ bins. Define

B_1 = B_{Γ_1},  B_j = B_{Γ_j}  for j = 4, ..., n′,   (3.5)

where

Γ_1 = {1} × {1, ..., m}  and  Γ_j = {j} × {1, ..., n}  for j = 4, ..., n′.

The following lemma states that, for n′ ≥ 4 and any vector (i, t_4, t_5, ..., t_{n′}) of nonnegative integers, the conditional probability of the event {∑_{j=l}^{n′} B_j ≥ t_l, l = 4, ..., n′}, given B_1 < i, becomes larger when the probability vector (p_{1,m}, p_{2,m}, p_{3,m}, ..., p_{n′,m}) of outcomes for the mth trial (throwing ball m into one of the bins) is replaced by the vector (p_{1,m} + p_{2,m}, 0, p_{3,m}, ..., p_{n′,m}).

Lemma 3.1. For any vector (i, t_4, t_5, ..., t_{n′}) of nonnegative integers and n′ ≥ 4, we have

P[∑_{j=l}^{n′} B_j ≥ t_l, l = 4, ..., n′ | B_1 + B_{2,m} < i] ≥ P[∑_{j=l}^{n′} B_j ≥ t_l, l = 4, ..., n′ | B_1 < i],   (3.6)

where the B_j's are defined by (3.5).

Proof. We give the proof for the case m > n; the proof of Lemma 3.1 in Hu and Xie (2006) is also valid for the case m ≤ n. For any vector (i, t_4, t_5, ..., t_{n′}) of nonnegative integers, denote

A(t_4, ..., t_{n′}, i) = P[∑_{j=l}^{n′} B_j ≥ t_l, l = 4, ..., n′; B̄_1 < i]

and D(i) = P[B̄_1 < i], where B̄_1 = B_1 − B_{1,m}. Set p_l = p_{l,m} for l = 1, ..., n′. From Corollary 2.2, we have that

D(i)A(t, i − 1) ≥ A(t, i)D(i − 1),  t = (t_4, ..., t_{n′}).   (3.7)

To prove (3.6), it suffices to verify that

P[∑_{j=l}^{n′} B_j ≥ t_l, l = 4, ..., n′; B_1 + B_{2,m} < i] · P[B_1 < i]
 ≥ P[∑_{j=l}^{n′} B_j ≥ t_l, l = 4, ..., n′; B_1 < i] · P[B_1 + B_{2,m} < i].   (3.8)

By conditioning on the outcome of the mth trial (assigning the value 1 successively to each B_{i,m}, i = 1, ..., n′) on both sides of (3.8), we get that the left hand side (LHS) and the right hand side (RHS) of (3.8) are, respectively,

LHS = [(p_1 + p_2)A(t, i − 1) + (1 − p_1 − p_2)A(t, i)] · [p_1 D(i − 1) + (1 − p_1)D(i)]

and

RHS = [p_1 A(t, i − 1) + (1 − p_1)A(t, i)] · ([p_1 D(i − 1) + (1 − p_1)D(i)] + p_2[D(i − 1) − D(i)]).

After carrying out the multiplications, we get that

LHS − RHS = p_2[D(i)A(t, i − 1) − D(i − 1)A(t, i)] ≥ 0,

where the last inequality follows from (3.7). This completes the proof of the lemma. □
Theorem 3.3. For any positive integers m and n, let X_1, X_2, ..., X_{m∨n} be independent random variables with possibly nonidentical distributions. Fix 1 ≤ j_1 < j_2 < ··· < j_r ≤ n. Then

(1) for all 1 ≤ i ≤ m and x_1 ≤ x_2 ≤ ··· ≤ x_r,

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | X_{i:m} > y]   (3.9)

is increasing in y ∈ (−∞, x_1];

(2) for all 1 ≤ i ≤ m and x_1 ≤ x_2 ≤ ··· ≤ x_r,

P[X_{j_1:n} < x_1, X_{j_2:n} < x_2, ..., X_{j_r:n} < x_r | X_{i:m} < y]   (3.10)

is decreasing in y ∈ [x_r, ∞).

Proof. We give the proof of the first result only; the second result is the dual case of the first one. Without loss of generality, assume that y < x_1 < x_2 < ··· < x_r. Let the counting processes {N(u)} and {N_1(u)} be as defined in the proof of Theorem 3.1. Then (3.9) is equivalent to

P[N(x_1) < j_1, N(x_2) < j_2, ..., N(x_r) < j_r | N_1(y) < i].

To prove the first result, we need to verify that for y_1 < y_2 < x_1,

P[N(x_1) < j_1, N(x_2) < j_2, ..., N(x_r) < j_r | N_1(y_1) < i]
 ≤ P[N(x_1) < j_1, N(x_2) < j_2, ..., N(x_r) < j_r | N_1(y_2) < i].   (3.11)

Set I_1 = (−∞, y_1], I_2 = (y_1, y_2], I_3 = (y_2, x_1], I_{r+3} = (x_r, ∞) and I_l = (x_{l−3}, x_{l−2}] for l = 4, ..., r + 2. Now, consider the experiment with m ∨ n balls and n′ = r + 3 bins and with probabilities {p_{i,k}} given by

p_{i,k} = P[X_k ∈ I_i]  for k = 1, ..., m ∨ n and i = 1, ..., n′.

Here, ball k goes into bin i if, and only if, X_k ∈ I_i. Let B_1 and {B_j, j = 4, ..., n′} be as defined in (3.5), let B_2 = B_{Γ_2} with Γ_2 = {2} × {1, ..., m}, and set t_l = n − j_{l−3} for l = 4, ..., n′. Then, (3.11) becomes

P[∑_{j=l}^{n′} B_j > t_l, l = 4, ..., n′ | B_1 < i] ≤ P[∑_{j=l}^{n′} B_j > t_l, l = 4, ..., n′ | B_1 + B_2 < i].   (3.12)

The rest of the proof is similar to that of Theorem 3.4 in Boland et al. (1996) or of Theorem 3.1 in Hu and Xie (2006). If the probability vector (p_{1,k}, p_{2,k}, p_{3,k}, ..., p_{n′,k}) of outcomes for the kth trial is replaced by the vector (p_{1,k} + p_{2,k}, 0, p_{3,k}, ..., p_{n′,k}), then Lemma 3.1 states that the conditional probability of the event {∑_{j=l}^{n′} B_j > t_l, l = 4, ..., n′}, given B_1 < i, becomes larger. Doing this successively for all trials k (k = 1, ..., m) yields the desired inequality (3.12). This completes the proof of the theorem. □

Mi and Shaked (2002) proved that, for any sequence of constants {z_1, z_2, ...},

z_{i:m} ≤ z_{j:n}  whenever j ≥ i and j − i ≥ n − m.   (3.13)

Observing this fact, a consequence of Theorem 3.3 is the following corollary.

Corollary 3.1. For positive integers m and n, let X_1, X_2, ..., X_{m∨n} be independent random variables with possibly nonidentical distributions. Fix 1 ≤ j_1 < j_2 < ··· < j_r ≤ n.

(1) If j_1 − i ≥ max{n − m, 0}, then

P[X_{j_1:n} > x_1, X_{j_2:n} > x_2, ..., X_{j_r:n} > x_r | X_{i:m} > y]   (3.14)

is increasing in y for all (x_1, ..., x_r) ∈ R^r.
(2) If j_r − i ≤ min{n − m, 0}, then

P[X_{j_1:n} < x_1, X_{j_2:n} < x_2, ..., X_{j_r:n} < x_r | X_{i:m} < y]

is decreasing in y for all (x_1, ..., x_r) ∈ R^r.

Proof. We give the proof of the first result only; the second result is the dual case of the first one. Without loss of generality, assume that x_1 < x_2 < ··· < x_r since 1 ≤ j_1 < j_2 < ··· < j_r ≤ n. By Theorem 3.3, the conditional probability in (3.14) is increasing in y ∈ (−∞, x_1]. By (3.13), j_1 − i ≥ max{n − m, 0} implies that X_{j_k:n} ≥ X_{j_1:n} ≥ X_{i:m} for any k. Thus, the desired result is trivial for y > x_r. If y ∈ (x_k, x_{k+1}] for some k, 1 ≤ k < r, then the conditional probability in (3.14) reduces to

P[X_{j_{k+1}:n} > x_{k+1}, ..., X_{j_r:n} > x_r | X_{i:m} > y],

which, again by Theorem 3.3, is increasing in y. This completes the proof. □
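Corollary 3.1(1) with m ≠ n can likewise be checked by exact enumeration over the intervals determined by s and t. In the following Python sketch, the exponential rates and the choices of m, n, i, j and t are illustrative assumptions, chosen so that j − i = 2 = max{n − m, 0}; the conditional probability is increasing over the whole range of the conditioning value s, and equals 1 once s ≥ t, in line with (3.13).

```python
from itertools import product
import math

rates = [0.5, 1.0, 1.5, 2.0, 2.5]   # X_k ~ Exponential(rates[k-1]), illustrative
m, n, i, j, t = 3, 5, 1, 3, 1.0     # j - i = 2 >= max{n - m, 0} = 2

def cond_prob(s):
    """Exact P[X_{j:n} > t | X_{i:m} > s]: classify each X_k into one of the
    intervals (-inf, a], (a, b], (b, inf), with a = min(s, t), b = max(s, t),
    then enumerate all 3^n configurations."""
    a, b = min(s, t), max(s, t)
    cdf = lambda r, u: 1.0 - math.exp(-r * u)
    P = [[cdf(r, a), cdf(r, b) - cdf(r, a), 1.0 - cdf(r, b)] for r in rates]
    s_idx = 0 if s <= t else 1      # X_k <= s  iff  its bin index <= s_idx
    t_idx = 0 if t <= s else 1      # X_k <= t  iff  its bin index <= t_idx
    num = den = 0.0
    for cfg in product(range(3), repeat=n):
        w = math.prod(P[k][cfg[k]] for k in range(n))
        if sum(cfg[k] <= s_idx for k in range(m)) < i:      # {X_{i:m} > s}
            den += w
            if sum(cfg[k] <= t_idx for k in range(n)) < j:  # {X_{j:n} > t}
                num += w
    return num / den

vals = [cond_prob(s) for s in (0.2, 0.5, 0.8, 1.0, 1.5, 2.0)]
print(all(v2 >= v1 - 1e-12 for v1, v2 in zip(vals, vals[1:])))  # increasing in s
```

For s ≥ t the conditioning event {X_{1:3} > s} forces X_{3:5} ≥ X_{1:3} > t by (3.13), so the last values of `vals` equal 1 exactly.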
From Corollary 3.1, we get the next result, which generalizes (1.1), i.e., Theorem 3.4 in Boland et al. (1996).

Corollary 3.2. For positive integers m and n, let X_1, X_2, ..., X_{m∨n} be independent random variables with possibly nonidentical distributions. Then

(1) RTI(X_{j:n} | X_{i:m}) for j − i ≥ max{n − m, 0};
(2) LTD(X_{j:n} | X_{i:m}) for j − i ≤ min{n − m, 0}.

Acknowledgement

We thank the referees for comments on a previous draft of the paper. The comments led us to significantly improve the paper.

References

Avérous, J., Genest, C., Kochar, S.C., 2005. On the dependence structure of order statistics. J. Multivariate Anal. 94, 159–171.
Barlow, R.E., Proschan, F., 1975. Statistical Theory of Reliability and Life Testing: Probability Models. Holt, Rinehart and Winston, New York, NY.
Block, H.W., Bueno, V., Savits, T.H., Shaked, M., 1987. Probability inequalities via negative dependence for random variables conditioned on order statistics. Naval Res. Logistics 34, 547–554.
Boland, P.J., Hollander, M., Joag-Dev, K., Kochar, S., 1996. Bivariate dependence properties of order statistics. J. Multivariate Anal. 56, 75–89.
Dubhashi, D., Ranjan, D., 1998. Balls and bins: a study in negative dependence. Random Struct. Algorithms 13, 99–124.
Hu, T., Xie, C., 2006. Negative dependence in the balls and bins experiment with applications to order statistics. J. Multivariate Anal. 97, 1342–1354.
Hu, T., Yang, J., 2004. Further developments on sufficient conditions for negative dependence of random variables. Statist. Probab. Lett. 66, 369–381.
Hu, T., Zhu, Z., 2003. Stochastic comparisons of order statistics from two samples. Southeast Asian Bull. Math. 27, 89–98.
Karlin, S., Rinott, Y., 1980. Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions. J. Multivariate Anal. 10, 467–498.
Khaledi, B.-E., Kochar, S.C., 2000. Dependence among spacings. Probab. Eng. Inform. Sci. 14, 461–472.
Khaledi, B.-E., Kochar, S., 2005. Dependence orderings for generalized order statistics. Statist. Probab. Lett. 73, 357–367.
Kim, S.H., David, H.A., 1990. On the dependence structure of order statistics and concomitants of order statistics. J. Statist. Plann. Inference 24, 363–368.
Mi, J., Shaked, M., 2002. Stochastic dominance of random variables implies the dominance of their order statistics. J. Indian Statist. Assoc. 40, 161–168.
Shanthikumar, J.G., 1987. On stochastic comparison of random variables. J. Appl. Probab. 24, 123–136.
Tukey, J.W., 1958. A problem of Berkson, and minimum variance orderly estimators. Ann. Math. Statist. 29, 588–592.