Limit theory of generalized order statistics

H.M. Barakat

Faculty of Science, Department of Mathematics, Zagazig University, Zagazig, Egypt

Journal of Statistical Planning and Inference 137 (2007) 1–11

Received 26 April 2004; received in revised form 22 September 2005; accepted 24 October 2005; available online 13 December 2005.

Abstract

Generalized order statistics (gos) were introduced by Kamps [1995. A Concept of Generalized Order Statistics. Teubner, Stuttgart] to unify several models of ordered random variables (rv's), e.g., (ordinary) order statistics (oos), records, and sequential order statistics (sos). For a wide subclass of gos that includes oos and sos, the possible limit distribution functions (df's) of the maximum gos were obtained in Nasri-Roudsari [1996. Extreme value theory of generalized order statistics. J. Statist. Plann. Inference 55, 281–297]. In this paper, for this subclass, necessary and sufficient conditions for weak convergence, as well as the form of the possible limit df's of extreme, intermediate and central gos, are derived. These results are then extended to a wider subclass.
© 2005 Elsevier B.V. All rights reserved.

MSC: Primary 60F05; 62E20; Secondary 62E15; 62G30

Keywords: Weak convergence; Generalized order statistics; Generalized extremes; Generalized central order statistics; Generalized quantiles; Generalized intermediate order statistics

1. Introduction

Generalized order statistics (gos) have been introduced and extensively studied in Kamps (1995) as a unified theoretical set-up which contains a variety of models of ordered random variables (rv's) with different interpretations. Examples of such models are the ordinary order statistics (oos), sequential order statistics (sos), progressive type II censored order statistics (pos), record values, kth record values and Pfeifer's records. These models can be effectively applied, e.g., in reliability theory. The common approach makes it possible to derive several distributional properties at once. The structural similarities of these models are based on the similarity of their joint density functions. Specifically, let $n$ be a natural number, $k > 0$, $m_1, m_2, \ldots, m_{n-1} \in \mathbb{R}$, and $M_r = \sum_{j=r}^{n-1} m_j$, $1 \le r \le n-1$, be parameters such that $\gamma_r = k + n - r + M_r > 0$ for all $r \in \{1, 2, \ldots, n-1\}$, and let $\tilde m = (m_1, m_2, \ldots, m_{n-1})$, if $n \ge 2$ ($\tilde m \in \mathbb{R}$ arbitrary, if $n = 1$). If the rv's $X(r, n, \tilde m, k)$, $r = 1, 2, \ldots, n$, possess a joint density of the form

$f^{(\tilde m,k)}_{1,2,\ldots,n:n}(x_1, x_2, \ldots, x_n) = f_{X(1,n,\tilde m,k), X(2,n,\tilde m,k), \ldots, X(n,n,\tilde m,k):n}(x_1, x_2, \ldots, x_n) = k\Big(\prod_{j=1}^{n-1}\gamma_j\Big)\Big(\prod_{i=1}^{n-1}(1 - F(x_i))^{m_i}\Big)(1 - F(x_n))^{k-1}\prod_{i=1}^{n} f(x_i), \qquad -\infty < x_1 < \cdots < x_n < \infty,$

then they are called gos. Choosing the parameters appropriately, models such as oos ($m_1 = m_2 = \cdots = m_{n-1} = 0$, $k = 1$), order statistics with non-integral sample size ($m_1 = m_2 = \cdots = m_{n-1} = 0$, $k = \alpha - n + 1$, where $\alpha$ is any positive real number such that $\alpha > n - 1$), $k$th record values ($m_1 = m_2 = \cdots = m_{n-1} = -1$ and $k$ is any positive integer), sos ($m_i = (n-i+1)\alpha_i - (n-i)\alpha_{i+1} - 1$, $1 \le i \le n-1$, $k = \alpha_n$ and $\alpha_1, \alpha_2, \ldots, \alpha_n > 0$), pos with censoring scheme $(R_1, R_2, \ldots, R_m)$ ($m_i = R_i$, $i = 1, 2, \ldots, m-1$; $m_i = 0$, $i = m, m+1, \ldots, n-1$, and $k = R_m + 1$; see Balakrishnan and Aggarwala, 2000), and Pfeifer's record values ($m_i = \beta_i - \beta_{i+1} - 1$, $k = \beta_n$ and $\beta_1, \beta_2, \ldots, \beta_n > 0$) are seen to be particular cases.

In a wide subclass of gos, specifically when $m_1 = m_2 = \cdots = m_{r-1} = m$, a representation of the marginal distribution function (df) $\Phi^{(\tilde m,k)}_{r:n}(x) = P(X(r, n, \tilde m, k) \le x)$ is given in Kamps (1995). Namely,

$\Phi^{(\tilde m,k)}_{r:n}(x) = 1 - C_{r-1}(1 - F(x))^{\gamma_r}\sum_{j=0}^{r-1}\dfrac{(g_m(x))^j}{j!\,C_{r-j-1}},$

where, if $m \ne -1$, $(m+1)g_m(x) = G_m(x) = 1 - (1 - F(x))^{m+1}$ is a df, while $g_{-1}(x) = -\log(1 - F(x))$, and $C_{r-1} = \prod_{i=1}^{r}\gamma_i$, $r = 1, 2, \ldots, n$, with $\gamma_n = k$.
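As a concrete illustration of this representation (not part of the original paper), the following Python sketch evaluates $\Phi^{(\tilde m,k)}_{r:n}(x)$ from the sum formula for $m > -1$ and, for the oos choice $m = 0$, $k = 1$, cross-checks it against the classical formula $P(X_{r:n} \le x) = I_{F(x)}(r, n-r+1)$. The helper name and the standard exponential baseline df are arbitrary choices made only for the example.

```python
import math
import numpy as np
from scipy.special import betainc  # betainc(a, b, x) = I_x(a, b), regularized incomplete beta

def gos_marginal_cdf(x, r, n, m, k, F):
    """Kamps' sum representation of the df of the r-th m-gos (case m > -1):
    Phi_{r:n}(x) = 1 - C_{r-1} (1 - F(x))**gamma_r * sum_{j=0}^{r-1} g_m(x)**j / (j! * C_{r-j-1}),
    with gamma_i = k + (n - i)(m + 1), C_s = gamma_1 * ... * gamma_{s+1},
    and g_m(x) = (1 - (1 - F(x))**(m + 1)) / (m + 1)."""
    gamma = [k + (n - i) * (m + 1) for i in range(1, r + 1)]   # gamma_1, ..., gamma_r
    C = np.cumprod(gamma)                                      # C[s] = C_s
    Fx = F(x)
    g = (1.0 - (1.0 - Fx) ** (m + 1)) / (m + 1)
    total = sum(g ** j / (math.factorial(j) * C[r - j - 1]) for j in range(r))
    return 1.0 - C[r - 1] * (1.0 - Fx) ** gamma[r - 1] * total

F = lambda t: 1.0 - np.exp(-t)          # assumed baseline df (standard exponential)
n, r, x = 10, 3, 0.7
print(gos_marginal_cdf(x, r, n, m=0, k=1, F=F))   # gos formula with oos parameters
print(betainc(r, n - r + 1, F(x)))                # classical oos df; the two values agree
```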

The possible limit df's of $\Phi^{(\tilde m,k)}_{n:n}$, i.e., the limit df of the maximum gos, under the condition $m_1 = m_2 = \cdots = m_{n-1} = m \ne -1$ (in this case, clearly, the record values are excluded) and their domain of attraction under linear normalization are given in Nasri-Roudsari (1996). By using the technique of Christoph and Falk (1996), analogous results under power normalization are derived in Nasri-Roudsari (1999). If $m_i = m$, $1 \le i \le n-1$, the corresponding gos are called m-generalized order statistics (m-gos) (cf. Cramer, 2003). Possible nondegenerate limit distributions and the convergence rate of the upper extreme m-gos, i.e., the $(n-r+1)$th m-gos for fixed $r$, are discussed in Nasri-Roudsari and Cramer (1999). The asymptotic normality of intermediate and central gos, which depends on the differentiability of the underlying df $F$, was derived by Cramer (2003) (see Section 5.7 in Cramer, 2003). In this paper the possible limit df's of the m-gos $X(n-r+1, n, \tilde m, k)$ (upper gos), when $m \ne -1$, are derived in the following distinct cases:
(1) the extreme case, where $r$ is any fixed integer with $1 \le r < n$;
(2) the central case, where $r \to \infty$ and $r/n \to \lambda \in (0, 1)$, as $n \to \infty$;
(3) the intermediate case, where $\min(r, n-r) \to \infty$ and $r/n \to 0$, as $n \to \infty$.

The necessary and sufficient conditions for the weak convergence, as $n \to \infty$ (written $\overset{w}{\longrightarrow}_n$), of the df's of the gos in the above-mentioned cases are obtained. These results are extended to a more general case, where $m_1 = m_2 = \cdots = m_{n-r} = m \ne -1$ and $\check M_n = (1/(r-1))\sum_{j=n-r+1}^{n-1} m_j \to \check M$, as $n \to \infty$ (see Remark 1.1). In this connection an interesting and unexpected result is obtained when $\check M = -1$. Namely, in the above three cases, if $\check M = -1$, we get $X(n-r+1, n, \tilde m, k) \overset{w}{=}_n X(n, n, \tilde m, k)$, where $X_n \overset{w}{=}_n Y_n$ means that the df's of the rv's $X_n$ and $Y_n$ weakly converge to the same nondegenerate df. Finally, it is worth mentioning that the proof here is different, shorter and easier than the one in Nasri-Roudsari (1996).

Remark 1.1. In many real-life applications (e.g., sos and pos models) it may happen that the values of the parameters $(m_1, m_2, \ldots)$ themselves change with $n$. Thus, a subclass of the gos model more flexible than m-gos must allow a triangular scheme of parameters $m^{(n)}_j$, $1 \le j < n$. In this case we consider a wider subclass of gos than m-gos, for which there exists a natural number $n_0$ such that for all $n > n_0$ we have $m^{(n)}_j = m$, $\forall\, 1 \le j < n-r$, while $\check M_n = (1/(r-1))\sum_{j=1}^{r-1} m^{(n)}_{n-j} \to \check M$, as $n \to \infty$. To simplify the notation, we will write $m_j$ instead of $m^{(n)}_j$ whenever this does not lead to confusion.

Nasri-Roudsari (1996) gave a different representation of the df $\Phi^{(\tilde m,k)}_{n-r+1:n}(x)$, which is more useful in the study of extreme value theory. In the following lemma, this representation is given in a slightly different form, which will be a basic tool in the study of the general limit theory of the gos (extreme, central and intermediate cases).

Lemma 1.1. Let $I_x(a, b) = (1/B(a, b))\int_0^x t^{a-1}(1-t)^{b-1}\,dt$ denote the incomplete beta function, and let $m_1 = m_2 = \cdots = m_{n-r} = m$. Furthermore, let $\check M_n = (1/(r-1))\sum_{j=n-r+1}^{n-1} m_j$ be the arithmetic mean of the parameters $m_{n-r+1}, \ldots, m_{n-1}$. If one of the conditions

C1: $m > -1$ and $\check M_n \ge -1$

and

C2: $m > -1$, $\check M_n < -1$ and $k > -(\check M_n + 1)\max(r-1, n-r)$

is satisfied, then for any $r \in \{2, \ldots, n\}$, we get

$\Phi^{(\tilde m,k)}_{n-r+1:n}(x) = I_{G_m(x)}(N' - R'_n + 1, R'_n)$,   (1.1)

where

$R'_n = \dfrac{k}{m+1} + \dfrac{\check M_n + 1}{m+1}\,r - \dfrac{\check M_n + 1}{m+1}$,   (1.2)

$N' = n + \dfrac{k}{m+1} + \dfrac{(\check M_n - m)r - (\check M_n + 1)}{m+1}$,   (1.3)

and $G_m(x) = 1 - (1 - F(x))^{m+1}$ is a df. Moreover, if $m > -1$ and $r = 1$, then (1.1) is satisfied with (1.2), (1.3) and $\check M_n = 0$.

Proof. The representation (1.1) follows directly from Lemma 2.5 in Nasri-Roudsari (1996). Namely, replacing $r$ by $n-r+1$ one obtains

$\Phi^{(\tilde m,k)}_{n-r+1:n}(x) = I_{G_m(x)}\Big(n - r + 1, \dfrac{\gamma_{n-r+1}}{m+1}\Big) = I_{G_m(x)}(N' - R'_n + 1, R'_n).$

On the other hand, if $r = 1$ and $m > -1$, representation (1.1) follows immediately from formula 3 (Remark 2.6) in Nasri-Roudsari (1996). Now, we notice that representation (1.1) holds only if $m$ and $\check M_n$ are chosen such that $R'_n, N' - R'_n + 1 > 0$. Since $k > 0$ and $r \ge 1$, the condition C1 clearly implies $R'_n, N' - R'_n + 1 > 0$. Moreover, if $m > -1$, while $\check M_n < -1$, we get $R'_n, N' - R'_n + 1 > 0$ only if $k > -(\check M_n + 1)(r-1)$, but the necessity of the condition $\gamma_r = k + n - r + M_r = k + (n-r)(\check M_n + 1) > 0$ (see Remark 2.2 in Kamps, 1995) implies the necessity of the condition $k > -(\check M_n + 1)(n-r)$. This means that condition C2, when $r \ge 2$, is sufficient for $R'_n, N' - R'_n + 1 > 0$ and for $\gamma_r$ to be positive (the condition $\gamma_r > 0$ is a necessary condition for the parameters of gos). The lemma is thus established.

Remark 1.2. In the particular case $m_1 = m_2 = \cdots = m_{n-1} = m\ (> -1)$, we get $\check M_n = m$. This leads to $R'_n = k/(m+1) + r - 1 = R$ and $N' = n + k/(m+1) - 1 = N$.

Remark 1.3. When the condition C2 is satisfied, it is true that $k = k(n) \to \infty$, as $n \to \infty$.
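In the m-gos case of Remark 1.2, representation (1.1) can be checked by simulation. The sketch below (an illustration, not part of the paper) draws gos via the product representation of uniform gos in terms of independent beta variables, $1 - U(r, n, m, k) \overset{d}{=} \prod_{j=1}^{r} B_j$ with $B_j \sim \mathrm{Beta}(\gamma_j, 1)$; this is a standard gos fact that is not stated in this section and is treated here as an assumption. The helper names, the exponential baseline $F$, and all parameter values are likewise arbitrary choices for the example.

```python
import numpy as np
from scipy.special import betainc

rng = np.random.default_rng(0)

def gos_cdf_lemma11(x, r, n, m, k, F):
    """Representation (1.1) in the m-gos case (Remark 1.2):
    Phi_{n-r+1:n}(x) = I_{G_m(x)}(N - R + 1, R),
    with R = k/(m+1) + r - 1, N = n + k/(m+1) - 1 and G_m = 1 - (1 - F)**(m+1)."""
    R = k / (m + 1) + r - 1
    N = n + k / (m + 1) - 1
    Gm = 1.0 - (1.0 - F(x)) ** (m + 1)
    return betainc(N - R + 1, R, Gm)

def simulate_uniform_gos(n, m, k, size):
    """Uniform gos U(1),...,U(n) via 1 - U(j) = B_1*...*B_j, B_j ~ Beta(gamma_j, 1) (assumed)."""
    gamma = k + (n - np.arange(1, n + 1)) * (m + 1)        # gamma_1, ..., gamma_n
    B = rng.uniform(size=(size, n)) ** (1.0 / gamma)       # B_j ~ Beta(gamma_j, 1)
    return 1.0 - np.cumprod(B, axis=1)

n, r, m, k, x = 20, 2, 0.5, 1.5, 2.0
F = lambda t: 1.0 - np.exp(-t)                             # assumed baseline df
U = simulate_uniform_gos(n, m, k, size=200_000)
X = -np.log(1.0 - U[:, n - r])                             # X(n-r+1, n, m, k) = F^{-1}(U(n-r+1))
print(np.mean(X <= x))                                     # Monte Carlo estimate of the df
print(gos_cdf_lemma11(x, r, n, m, k, F))                   # representation (1.1); close agreement
```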

2. Main result

2.1. Extreme case

The principal concern of extreme value theory is to give conditions under which there exist normalizing constants $\alpha_n > 0$ and $\beta_n$ such that

$\tilde\Phi^{(0,1)}_{n-r+1:n}(\alpha_n x + \beta_n) = I_{F(\alpha_n x + \beta_n)}(n - r + 1, r) \overset{w}{\longrightarrow}_n \tilde\Phi^{(0,1)}_r(x)$,   (2.1)

where $\tilde\Phi^{(0,1)}_r(x)$ is some nondegenerate df. In this case we say that $F$ belongs to the domain of attraction of the limit df $\tilde\Phi^{(0,1)}_r$, written $F \in D(\tilde\Phi^{(0,1)}_r)$. The central result of the extreme value theory is that the class of possible limit df's in (2.1) is restricted to essentially three different types $1 - \Gamma_r(U_{i,\alpha}(x))$, $i = 1, 2, 3$, where $\Gamma_r(x) = (1/\Gamma(r))\int_0^x t^{r-1}e^{-t}\,dt$ is the incomplete gamma function and

$U_1(x) = U_{1;\alpha}(x) = e^{-x}$, $\forall x$;
$U_{2;\alpha}(x) = \infty$, $x \le 0$, and $U_{2;\alpha}(x) = x^{-\alpha}$, $x > 0$;
$U_{3;\alpha}(x) = (-x)^{\alpha}$, $x \le 0$, and $U_{3;\alpha}(x) = 0$, $x > 0$.   (2.2)

Moreover, (2.1) is satisfied with $\tilde\Phi^{(0,1)}_r(x) = 1 - \Gamma_r(U_{i,\alpha}(x))$ for some $i \in \{1, 2, 3\}$ if, and only if,

$n\bar F(\alpha_n x + \beta_n) = n(1 - F(\alpha_n x + \beta_n)) \longrightarrow U_{i,\alpha}(x)$ as $n \to \infty$.   (2.3)
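The classical criterion (2.3) is easy to visualise numerically. In the sketch below (an illustration only) the baseline df is assumed standard exponential, for which $\alpha_n = 1$ and $\beta_n = \log n$ give $n\bar F(\alpha_n x + \beta_n) = e^{-x} = U_1(x)$, and the exact df $\tilde\Phi^{(0,1)}_{n-r+1:n}(\alpha_n x + \beta_n)$ approaches $1 - \Gamma_r(e^{-x})$.

```python
import numpy as np
from scipy.special import betainc, gammainc  # gammainc(a, z): regularized lower incomplete gamma

F_bar = lambda t: np.exp(-t)        # survival function of the assumed exponential df
x, r = 0.8, 2
for n in (10**2, 10**4, 10**6):
    a_n, b_n = 1.0, np.log(n)
    tail = n * F_bar(a_n * x + b_n)                               # condition (2.3)
    exact = betainc(n - r + 1, r, 1.0 - F_bar(a_n * x + b_n))     # I_{F}(n - r + 1, r)
    print(n, tail, exact)
print("limits:", np.exp(-x), 1.0 - gammainc(r, np.exp(-x)))       # U_1(x) and 1 - Gamma_r(U_1(x))
```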

The following theorem, and its corollaries, extend the above result to the upper extreme gos $X(n-r+1, n, \tilde m, k)$. Moreover, Corollaries 2.2 and 2.3 extend the result of Nasri-Roudsari and Cramer (1999) (for the case $r \ge 2$) to a wider subclass of gos than m-gos.

Theorem 2.1. Let $m_1 = m_2 = \cdots = m_{n-1} = m > -1$ and $r \in \{1, 2, \ldots, n\}$. Then, there exist normalizing constants $a_n > 0$ and $b_n$ for which

$\Phi^{(\tilde m,k)}_{n-r+1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n \Phi^{(\tilde m,k)}_r(x)$,   (2.4)

where $\Phi^{(\tilde m,k)}_r(x)$ is a nondegenerate df, if, and only if, there exist normalizing constants $\alpha_n > 0$ and $\beta_n$ such that

$\tilde\Phi^{(0,1)}_{n-r+1:n}(\alpha_n x + \beta_n) \overset{w}{\longrightarrow}_n \tilde\Phi^{(0,1)}_r(x) = 1 - \Gamma_r(U_{i,\alpha}(x))$, $i \in \{1, 2, 3\}$.   (2.5)

In this case $\Phi^{(\tilde m,k)}_r(x) = 1 - \Gamma_R(U^{m+1}_{i,\alpha}(x))$, where $R = k/(m+1) + r - 1$. Moreover, $a_n$ and $b_n$ may be chosen such that $a_n = \alpha_{(n)}$ and $b_n = \beta_{(n)}$, where $(n) = n^{1/(m+1)}$.

Proof. Since $G_m(x) = 1 - (1 - F(x))^{m+1} = 1 - \bar F^{m+1}(x)$ is a df, $\bar G_m(x) = 1 - G_m(x) = \bar F^{m+1}(x)$ and $N = n + k/(m+1) - 1 \to \infty$, as $n \to \infty$, the result of Smirnov (1952) (Theorem 3, p. 133, or Lemma 3.1 in Barakat, 1997) and Lemma 1.1 yield that

$(1 - \Gamma_R(N\bar G_m(a_n x + b_n))) - \delta_N \le \Phi^{(\tilde m,k)}_{n-r+1:n}(a_n x + b_n) = I_{G_m(a_n x + b_n)}(N - R + 1, R) \le (1 - \Gamma_R(N\bar G_m(a_n x + b_n))) + \delta'_N$,

where $\delta_N$ and $\delta'_N$ converge to zero, as $N \to \infty$ (or equivalently, as $n \to \infty$). This yields that (2.4) is satisfied if, and only if, $N\bar G_m(a_n x + b_n) = N\bar F^{m+1}(a_n x + b_n) \to \tilde U(x)$, as $n \to \infty$, where $\Phi^{(\tilde m,k)}_r(x) = 1 - \Gamma_R(\tilde U(x))$. On the other hand, for all $x$ for which $\Phi^{(\tilde m,k)}_r(x) > 0$, we obviously have $N\bar G_m(a_n x + b_n) \sim n\bar G_m(a_n x + b_n) = n\bar F^{m+1}(a_n x + b_n)$. Therefore, we conclude that (2.4) is satisfied if, and only if,

$n\bar F^{m+1}(a_n x + b_n) \to \tilde U(x)$ as $n \to \infty$,   (2.6)

where $\Phi^{(\tilde m,k)}_r(x) = 1 - \Gamma_R(\tilde U(x))$. Now, let (2.5) be satisfied, i.e., $F \in D(\tilde\Phi^{(0,1)}_r)$, where $\tilde\Phi^{(0,1)}_r(x) = 1 - \Gamma_r(U_{i,\alpha}(x))$, $i \in \{1, 2, 3\}$. Then (2.3) obviously yields (since $(n) \to \infty$, as $n \to \infty$) $(n)\bar F(\alpha_{(n)} x + \beta_{(n)}) \to U_{i,\alpha}(x)$, as $n \to \infty$. Therefore, $n\bar F^{(m+1)}(\alpha_{(n)} x + \beta_{(n)}) \to U^{(m+1)}_{i,\alpha}(x)$, as $n \to \infty$. Thus, in view of (2.6), we conclude that (2.4) is satisfied with $a_n = \alpha_{(n)}$, $b_n = \beta_{(n)}$, and the nondegenerate df $\Phi^{(\tilde m,k)}_r(x) = 1 - \Gamma_R(U^{(m+1)}_{i,\alpha}(x))$. Let us now turn to the converse. Let (2.4) be satisfied. Then (2.6) is satisfied, where $\Phi^{(\tilde m,k)}_r(x) = 1 - \Gamma_R(\tilde U(x))$ is a nondegenerate df. Thus, we get

$(n)\bar F(a_n x + b_n) \longrightarrow \tilde U^{1/(m+1)}(x)$ as $n \to \infty$.   (2.7)

The latter implies that

$F^{(n)}(a_n x + b_n) \longrightarrow e^{-\tilde U^{1/(m+1)}(x)}$ as $n \to \infty$   (2.8)

(in fact, in view of the extreme value theory, (2.7) implies $\tilde\Phi^{(0,1)}_{(n)-r+1:(n)}(a_n x + b_n) \longrightarrow 1 - \Gamma_r(\tilde U^{1/(m+1)}(x))$, as $n \to \infty$, $r = 1, 2, \ldots$), where $e^{-\tilde U^{1/(m+1)}(x)}$ is a nondegenerate df. On the other hand, we have $((nt)/(n)) \to t^{1/(m+1)}$, as $n \to \infty$, for all $t \in \mathbb{R}^+$. Thus, $(n)$ is a regularly varying function at $+\infty$ with index $1/(m+1) > 0$ (for the definition of regularly varying functions, see de Haan, 1970). Hence, an application of Theorem 3 in Xie (1997) yields that the function $\tilde U^{1/(m+1)}(x)$ can take one and only one of the types (2.2). This obviously completes the proof (note that, in this case, $\tilde\Phi^{(0,1)}_{n-r+1:n}(a_{(n)^{-1}} x + b_{(n)^{-1}}) \longrightarrow 1 - \Gamma_r(\tilde U^{1/(m+1)}(x))$, as $n \to \infty$, where $(n)^{-1} = n^{m+1}$ and $\tilde U^{1/(m+1)}(x)$ has one and only one of the types (2.2)).
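Theorem 2.1 can be illustrated numerically through representation (1.1). The following sketch (not part of the paper) again assumes a standard exponential $F$, so that $i = 1$, $\alpha_\nu = 1$, $\beta_\nu = \log\nu$, hence $a_n = 1$ and $b_n = \beta_{(n)} = (\log n)/(m+1)$; the exact df of $X(n-r+1, n, \tilde m, k)$ is then compared with the limit $1 - \Gamma_R(U_1^{m+1}(x)) = 1 - \Gamma_R(e^{-(m+1)x})$.

```python
import numpy as np
from scipy.special import betainc, gammainc

def gos_cdf(y, r, n, m, k):
    """Exact df of X(n-r+1, n, m, k) for an exponential baseline F, via representation (1.1)."""
    R = k / (m + 1) + r - 1
    N = n + k / (m + 1) - 1
    Gm = 1.0 - np.exp(-(m + 1) * y)            # G_m(y) = 1 - (1 - F(y))**(m+1)
    return betainc(N - R + 1, R, Gm)

m, k, r, x = 1.0, 2.0, 3, 0.5
R = k / (m + 1) + r - 1
limit = 1.0 - gammainc(R, np.exp(-(m + 1) * x))     # 1 - Gamma_R(U_1^{m+1}(x))
for n in (10**2, 10**4, 10**6):
    b_n = np.log(n) / (m + 1)                        # b_n = beta_{(n)}, (n) = n**(1/(m+1))
    print(n, gos_cdf(x + b_n, r, n, m, k), limit)    # exact value approaches the limit
```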



Corollary 2.1. Let $m_1 = m_2 = \cdots = m_{n-1} = m < -1$, $k' > 0$, and let (2.5) be satisfied. Then,

$\Phi^{(\tilde m,\,k' - (n-1)(m+1))}_{n-r+1:n}(\alpha'_{(n)} x + \beta'_{(n)}) \overset{w}{\longrightarrow}_n 1 - \Gamma_{R'}(U^{-(m+1)}_{i,\alpha}(x))$,

where $\alpha'_{(n)} = 1/\alpha_{(n)} = n^{-1/(m+1)}$ and $R' = -k'/(m+1) + r - 1$.

Proof. By using the same argument which was applied in Section 4 (the case $m < -1$) in Nasri-Roudsari (1996), the corollary is proved exactly as Corollary 4.1 in Nasri-Roudsari (1996).

Corollary 2.2. Let $m_1 = m_2 = \cdots = m_{n-r} = m$ and let condition C1 of Lemma 1.1 be satisfied, such that $\check M > -1$. Then, the statement of Theorem 2.1 is still true if we replace $R$ by $R^* = k/(m+1) + ((\check M + 1)/(m+1))r - (\check M + 1)/(m+1)$, i.e.,

$\Phi^{(\tilde m,k)}_{n-r+1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n \Phi^{(\tilde m,k)}_r(x) = 1 - \Gamma_{R^*}(U^{(m+1)}_{i,\alpha}(x))$

if, and only if, (2.5) is satisfied.

Proof. The proof follows from the proof of Theorem 2.1, by replacing $N$ and $R$ by $N'$ and $R'_n$, respectively, and noting that $R'_n \sim R^*$ and $N' \sim N \sim n$, as $n \to \infty$.

Corollary 2.3. Let $m_1 = m_2 = \cdots = m_{n-r} = m > -1$ and let $\check M = -1$. Then, we get the following interesting result:

$\Phi^{(\tilde m,k)}_{n-r+1:n}(a_n x + b_n) \overset{w}{=}_n \Phi^{(\tilde m,k)}_{1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n 1 - \Gamma_{k/(m+1)}(U^{(m+1)}_{i,\alpha}(x))$ as $n \to \infty$.

Example 2.1. Consider a sos $X(n-r+1, n, \tilde 1, 1)$ (in this case $m = k = 1$, $\gamma_i = 1 + 2(n-i)$, $i \in \{1, 2, \ldots, n-1\}$, and $\gamma_n = k = 1$), with $\alpha_i = 2 - 1/(n-i+1)$, $i \in \{1, 2, \ldots, n-1\}$ (see Kamps, 1995). Under the condition of Theorem 2.1, the possible limit df's are of the form $\Phi^{(\tilde 1,1)}_r(x) = 1 - \Gamma_R(U^2_{i,\alpha}(x))$, $i = 1, 2, 3$, where $R = k/2 + r - 1$. Moreover, let $\tilde m$ be such that $m_1 = m_2 = \cdots = m_{n-r} = m > -1$ and $m_{n-r+1} = \cdots = m_{n-1} = m' > -1$. In this case, if we let $\{\alpha_i\}$, $i = 1, 2, \ldots, n-1$, be such that $m + 1 = (n-i+1)\alpha_i - (n-i)\alpha_{i+1}$, $1 \le i \le n-r$, and $m' + 1 = (n-i+1)\alpha_i - (n-i)\alpha_{i+1}$, $n-r+1 \le i \le n-1$, the gos $X(n-r+1, n, \tilde m, 1)$ can be interpreted as a sos with $\alpha_i = \gamma_i/(n-i+1)$, $n-r+1 \le i \le n-1$, and $(n-i+1)\alpha_i = (n-i)\alpha_{i+1} + m + 1$, $1 \le i \le n-r$. The possible limit df's of $X(n-r+1, n, \tilde m, 1)$ are $\Phi^{(\tilde m,1)}_r(x) = 1 - \Gamma_{R^*}(U^{(m+1)}_{i,\alpha}(x))$, $i = 1, 2, 3$, where $R^* = k/(m+1) + ((m'+1)/(m+1))r - (m'+1)/(m+1)$. Finally, if $m' = -1$ the possible limit df's of $X(n-r+1, n, \tilde m, 1)$ are $1 - \Gamma_{k/(m+1)}(U^{(m+1)}_{i,\alpha}(x))$, $i = 1, 2, 3$.
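The parameter bookkeeping in the first part of Example 2.1 can be verified mechanically. The short sketch below (an illustration with an arbitrary $n$) checks that $\alpha_i = 2 - 1/(n-i+1)$ indeed yields $m_i = 1$, $k = \alpha_n = 1$ and $\gamma_i = 1 + 2(n-i)$.

```python
n = 10
alpha = [2.0 - 1.0 / (n - i + 1) for i in range(1, n + 1)]      # alpha_1, ..., alpha_n
# sos parameters: m_i = (n - i + 1) alpha_i - (n - i) alpha_{i+1} - 1 and k = alpha_n
m = [(n - i + 1) * alpha[i - 1] - (n - i) * alpha[i] - 1 for i in range(1, n)]
k = alpha[n - 1]
# gamma_i = k + n - i + M_i with M_i = m_i + ... + m_{n-1}
gamma = [k + (n - i) + sum(m[i - 1:]) for i in range(1, n)]
print(all(abs(mi - 1.0) < 1e-12 for mi in m))                   # m_1 = ... = m_{n-1} = 1
print(abs(k - 1.0) < 1e-12)                                     # k = 1
print(all(abs(g - (1 + 2 * (n - i))) < 1e-12 for i, g in enumerate(gamma, 1)))  # gamma_i = 1 + 2(n-i)
```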

Remark 2.1. The subclass of gos considered in Corollary 2.2 and Example 2.1 (see also Remark 1.1) is not only a wider subclass of gos than m-gos, but is also more flexible, especially when we decide to choose a limit type among the possible limit df's of the upper extreme gos (by performing a goodness of fit test) as the asymptotic df for the given upper extreme gos. To see this fact, consider the sos model, which is an extension of the oos model and serves as a model describing certain interactions among the system components caused by failures of the components. To be more specific, suppose for a given large $n$ we observe that $m_1 = m_2 = \cdots = m_{n-r} = m$ while the remaining parameters $m_i$, $n-r+1 \le i \le n-1$, take different arbitrary values. For example, we may have $m_1 = m_2 = \cdots = m_{n-r} = m \ne m' = m_{n-r+1} = m_{n-r+2} = \cdots = m_{n-1}$ (where the $m$'s here may be the estimated values of the actual parameters). In this case we have $\check M = m'$. Moreover, performing a goodness of fit test on the types $1 - \Gamma_{R^*}(U^{(m+1)}_{i,\alpha}(x))$, $i = 1, 2, 3$, to choose one of them clearly will give a more accurate result than performing such a test on the types $1 - \Gamma_R(U^{(m+1)}_{i,\alpha}(x))$, $i = 1, 2, 3$.

2.2. Central case

Consider a variable rank sequence $r = r_n \to \infty$ such that $\sqrt n\,(r_n/n - \lambda) \to 0$, as $n \to \infty$, where $0 < \lambda < 1$. Smirnov (1952) (see also Leadbetter et al., 1983) showed that if there exist normalizing constants $\alpha_n > 0$ and $\beta_n$ such that

$\tilde\Phi^{(0,1)}_{n-r_n+1:n}(\alpha_n x + \beta_n) = I_{F(\alpha_n x + \beta_n)}(n - r_n + 1, r_n) \overset{w}{\longrightarrow}_n \tilde\Phi^{(0,1)}(x; \lambda)$,   (2.9)

where $\tilde\Phi^{(0,1)}(x; \lambda)$ is some nondegenerate df, then $\tilde\Phi^{(0,1)}(x; \lambda)$ must be one and only one of the types $N(W_{i;\lambda}(x))$, $i = 1, 2, 3, 4$, where $N(\cdot)$ denotes the standard normal df and

$W_{1;\lambda}(x) = -\infty$, $x \le 0$, and $W_{1;\lambda}(x) = c x^{\alpha}$, $x > 0$ ($c, \alpha > 0$);
$W_{2;\lambda}(x) = -c|x|^{\alpha}$, $x \le 0$, and $W_{2;\lambda}(x) = \infty$, $x > 0$ ($c, \alpha > 0$);
$W_{3;\lambda}(x) = -c_1|x|^{\alpha}$, $x \le 0$ ($c_1 > 0$), and $W_{3;\lambda}(x) = c_2 x^{\alpha}$, $x > 0$ ($c_2, \alpha > 0$);
$W_{4;\lambda}(x) = W_4(x) = -\infty$, $x \le -1$; $0$, $-1 < x \le 1$; $\infty$, $x > 1$.

In this case $F$ belongs to the $\lambda$-normal domain of attraction of the df $\tilde\Phi^{(0,1)}(x; \lambda)$, written $F \in D_\lambda(\tilde\Phi^{(0,1)}(x; \lambda))$. Moreover, (2.9) is satisfied with $\tilde\Phi^{(0,1)}(x; \lambda) = N(W_{i;\lambda}(x))$, for some $i \in \{1, 2, 3, 4\}$ if, and only if,

$\sqrt n\,\dfrac{\lambda - \bar F(\alpha_n x + \beta_n)}{C'_\lambda} \longrightarrow W_{i;\lambda}(x)$ as $n \to \infty$,   (2.10)

where $C'_\lambda = \sqrt{\lambda(1 - \lambda)}$.
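For orientation, the classical statement (2.9)–(2.10) can be checked numerically in the smooth case. The sketch below (an illustration only) assumes a uniform(0,1) baseline df, for which $\bar F(x) = 1 - x$; taking $\beta_n = 1 - \lambda$ and $\alpha_n = C'_\lambda/\sqrt n$ gives $W(x) = x$, so the limit df is the standard normal.

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import norm

lam, x = 0.3, 0.7
C_prime = np.sqrt(lam * (1.0 - lam))
for n in (10**2, 10**4, 10**6):
    r_n = int(lam * n) + 1                                   # sqrt(n)(r_n/n - lam) -> 0
    alpha_n, beta_n = C_prime / np.sqrt(n), 1.0 - lam
    Fy = np.clip(alpha_n * x + beta_n, 0.0, 1.0)             # uniform(0,1) df at alpha_n x + beta_n
    print(n, betainc(n - r_n + 1, r_n, Fy))                  # Phi^{(0,1)}_{n-r_n+1:n}(alpha_n x + beta_n)
print("limit:", norm.cdf(x))                                 # N(W(x)) with W(x) = x
```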

The following lemma is an essential tool in studying the limit df's of the central gos.

Lemma 2.1. Let $r_n$ be such that $\sqrt n\,(r_n/n - \lambda) \to 0$, as $n \to \infty$, where $0 < \lambda < 1$. Furthermore, let $m_1 = m_2 = \cdots = m_{n-1} = m > -1$. Then, there exist normalizing constants $a_n > 0$ and $b_n$ for which

$\Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n \Phi^{(\tilde m,k)}(x; \lambda)$,   (2.11)

where $\Phi^{(\tilde m,k)}(x; \lambda)$ is a nondegenerate df, if, and only if,

$\sqrt n\,\dfrac{\lambda - \bar F^{(m+1)}(a_n x + b_n)}{C'_\lambda} \longrightarrow W(x)$ as $n \to \infty$,   (2.12)

where $\Phi^{(\tilde m,k)}(x; \lambda) = N(W(x))$.

Proof. First we note that if $\sqrt n\,(r_n/n - \lambda) \to 0$, as $n \to \infty$, then $\sqrt N\,(R_N/N - \lambda) \to 0$, as $n \to \infty$, where $R_N = k/(m+1) + r_N - 1$. Keeping in mind that $G_m$ is a df, and by using the result of Smirnov (1952) (Lemma 2, p. 93, see also Theorem 2.5.2 in Leadbetter et al., 1983), we get that

$D_N(a_n x + b_n) = \Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + b_n) - N\Big(\sqrt N\,\dfrac{\lambda - \bar G_m(a_n x + b_n)}{C'_\lambda}\Big) = I_{G_m(a_n x + b_n)}(N - R_N + 1, R_N) - N\Big(\sqrt N\,\dfrac{\lambda - \bar G_m(a_n x + b_n)}{C'_\lambda}\Big)$

converges uniformly to zero. Thus, (2.11) is satisfied if, and only if,

$\sqrt N\,\dfrac{\lambda - \bar G_m(a_n x + b_n)}{C'_\lambda} \longrightarrow W(x)$ as $n \to \infty$,   (2.13)

where $N(W(x)) = \Phi^{(\tilde m,k)}(x; \lambda)$. Finally, since $N \sim n$, as $n \to \infty$, we conclude that (2.12) and (2.13) are equivalent, which was to be proved.

Theorem 2.2. Let $r_n$ be such that $\sqrt n\,(r_n/n - \lambda) \to 0$, as $n \to \infty$, where $0 < \lambda < 1$. Furthermore, let $m_1 = m_2 = \cdots = m_{n-1} = m > -1$. Then, there exist normalizing constants $a_n > 0$ and $b_n$ for which (2.11) is satisfied, for some nondegenerate df $\Phi^{(\tilde m,k)}(x; \lambda)$, if, and only if, for the same normalizing constants $a_n$ and $b_n$, we have $F \in D_{\lambda(m)}(N(W_{i;\lambda}(x)))$, for some $i \in \{1, 2, 3, 4\}$, where $\lambda(m) = \lambda^{1/(m+1)}$. That is, in view of (2.10), we have

$\sqrt n\,\dfrac{\lambda(m) - \bar F(a_n x + b_n)}{C'_{\lambda(m)}} \longrightarrow W_{i;\lambda}(x)$, $i \in \{1, 2, 3, 4\}$.   (2.14)

In this case we get $\Phi^{(\tilde m,k)}(x; \lambda) = N((C_{\lambda(m)}/C_\lambda)(m+1)W_{i;\lambda}(x))$, where $C_\lambda = C'_\lambda/\lambda$ and, analogously, $C_{\lambda(m)} = C'_{\lambda(m)}/\lambda(m)$.

Proof. Let $F \in D_{\lambda(m)}(N(W_{i;\lambda}(x)))$ (for the same normalizing constants $a_n > 0$ and $b_n$), i.e., let (2.14) be satisfied. Then, we get

$\bar F^{m+1}(a_n x + b_n) = (\lambda(m))^{m+1}\Big(1 - \dfrac{C'_{\lambda(m)}W_{i;\lambda}(x)}{\lambda(m)\sqrt n}(1 + o(1))\Big)^{m+1} = \lambda\Big(1 - \dfrac{(m+1)C'_{\lambda(m)}W_{i;\lambda}(x)}{\lambda(m)\sqrt n}(1 + o(1))\Big)$.

The latter relation obviously implies, as $n \to \infty$,

$\sqrt n\,\dfrac{\lambda - \bar F^{(m+1)}(a_n x + b_n)}{C'_\lambda} \longrightarrow \dfrac{(m+1)\lambda C'_{\lambda(m)}}{\lambda(m)C'_\lambda}W_{i;\lambda}(x) = \dfrac{C_{\lambda(m)}}{C_\lambda}(m+1)W_{i;\lambda}(x)$.

Therefore, Lemma 2.1 implies that $\Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n \Phi^{(\tilde m,k)}(x; \lambda)$, where $\Phi^{(\tilde m,k)}(x; \lambda) = N((C_{\lambda(m)}/C_\lambda)(m+1)W_{i;\lambda}(x))$. Conversely, assume that there exist $a_n > 0$ and $b_n$ such that (2.11) is satisfied with some nondegenerate df $\Phi^{(\tilde m,k)}(x; \lambda)$. Thus, in view of Lemma 2.1, we have $\sqrt n\,(\lambda - \bar F^{m+1}(a_n x + b_n))/C'_\lambda = \sqrt n\,(\lambda - \bar G_m(a_n x + b_n))/C'_\lambda \to W(x)$, as $n \to \infty$, where $\Phi^{(\tilde m,k)}(x; \lambda) = N(W(x))$. Consequently, we get $\bar G_m(a_n x + b_n) = \lambda(1 - (C_\lambda W(x)/\sqrt n)(1 + o(1)))$. This yields $\bar F(a_n x + b_n) = \lambda(m)(1 - (C_\lambda W(x)/((m+1)\sqrt n))(1 + o(1)))$, or equivalently $\sqrt n\,(\lambda(m) - \bar F(a_n x + b_n))/C'_{\lambda(m)} = (C_\lambda W(x)/((m+1)C_{\lambda(m)}))(1 + o(1))$. In view of (2.10), this immediately implies that $F \in D_{\lambda(m)}(N(C_\lambda W(x)/((m+1)C_{\lambda(m)})))$. The theorem is established.

Remark 2.2. Clearly, if $\tilde m = \tilde 0$, $k = 1$ (i.e., the case of oos), we get $C_{\lambda(m)} = C_\lambda$. Therefore, we have $\Phi^{(\tilde m,k)}(x; \lambda) = N(W_{i;\lambda}(x))$, $i \in \{1, 2, 3, 4\}$.

Corollary 2.4. Let $m_1 = m_2 = \cdots = m_{n-r_n} = m$, and let condition C1 of Lemma 1.1 be satisfied with $\check M > 0$. Furthermore, let $r_n$ be such that $\sqrt n\,(r_n/n - \lambda) \to 0$, as $n \to \infty$, where $0 < \lambda < 1$. Then, there exist normalizing constants $a_n > 0$ and $b_n$ for which (2.11) is satisfied for some nondegenerate df $\Phi^{(\tilde m,k)}(x; \lambda)$ if, and only if, for the same normalizing constants $a_n$ and $b_n$, we have $F \in D_{\acute\lambda(m)}(N(W_{i;\lambda}(x)))$, for some $i \in \{1, 2, 3, 4\}$, where $\acute\lambda(m) = \acute\lambda^{1/(m+1)}$ and $\acute\lambda = \lambda^{(\check M+1)/(m+1)}$. That is, in view of (2.10), we have $\sqrt n\,(\acute\lambda(m) - \bar F(a_n x + b_n))/C'_{\acute\lambda(m)} \longrightarrow W_{i;\lambda}(x)$. In this case we get $\Phi^{(\tilde m,k)}(x; \lambda) = N((C_{\acute\lambda(m)}/C_{\acute\lambda})(m+1)W_{i;\lambda}(x))$.

Proof. The proof is similar to the proof of Corollary 2.2.



Corollary 2.5. Let $m_1 = m_2 = \cdots = m_{n-r_n} = m > -1$, and let $\check M = -1$. Then, we get the following interesting result:

$\Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + b_n) \overset{w}{=}_n \Phi^{(\tilde m,k)}_{1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n 1 - \Gamma_{k/(m+1)}(U^{(m+1)}_{i,\alpha}(x))$,

where $a_n = \alpha_{(n)}$ and $b_n = \beta_{(n)}$. Moreover, $\alpha_n$ and $\beta_n$ are the normalizing constants defined in (2.5).

For $0 < \lambda < 1$, let $r_n = [\lambda n] + 1$, where $[\lambda n]$ represents the integer part of $\lambda n$. Then $X(r_n, n, \tilde m, k)$ represents the $\lambda$th generalized sample quantile and is a generalized central order statistic. The following result is an extension of the well-known result concerning the ordinary sample quantiles ($X(r_n, n, \tilde 0, 1)$).

Corollary 2.6 (generalized quantiles). Let $F$ be a continuous df with probability density function $f$. Furthermore, let $\bar F(x_o) = \lambda^{1/(m+1)} = \lambda(m)$ and $f(x_o) > 0$. Then,

$\Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + x_o) \overset{w}{\longrightarrow}_n N(x)$,

where $a_n = \lambda(m)C_\lambda/((m+1)\sqrt n\, f(x_o))$.

Proof. The proof follows immediately by combining Theorem 2.2 with Theorem 8.5.1 in Arnold et al. (1992).
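A numerical sketch of Corollary 2.6 (an illustration, not part of the paper): for an assumed standard exponential $F$ we have $\bar F(x_o) = \lambda^{1/(m+1)}$ at $x_o = -(\log\lambda)/(m+1)$ and $f(x_o) = \lambda(m)$, so $a_n = C_\lambda/((m+1)\sqrt n)$ with $C_\lambda = C'_\lambda/\lambda$; the exact df is evaluated through representation (1.1) and compared with $N(x)$.

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import norm

m, k, lam, x = 1.0, 2.0, 0.4, 0.6
x_o = -np.log(lam) / (m + 1)                       # bar F(x_o) = lam**(1/(m+1)) for exponential F
f_xo = np.exp(-x_o)                                # f(x_o) = lam**(1/(m+1)) = lam(m)
lam_m = lam ** (1.0 / (m + 1))                     # lam(m)
C_lam = np.sqrt(lam * (1.0 - lam)) / lam           # C_lambda = C'_lambda / lambda
for n in (10**3, 10**5, 10**7):
    r_n = int(lam * n) + 1
    R = k / (m + 1) + r_n - 1
    N = n + k / (m + 1) - 1
    a_n = lam_m * C_lam / ((m + 1) * np.sqrt(n) * f_xo)      # a_n from Corollary 2.6
    Gm = 1.0 - np.exp(-(m + 1) * (a_n * x + x_o))            # G_m at a_n x + x_o
    print(n, betainc(N - R + 1, R, Gm))                      # Phi^{(m,k)}_{n-r_n+1:n}(a_n x + x_o)
print("limit:", norm.cdf(x))
```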

2.3. Intermediate case

By an intermediate rank sequence, we mean a sequence $\{r_n\}$ such that $r_n \to \infty$, as $n \to \infty$, but $r_n = o(n)$. Wu (1966) (see also Leadbetter et al., 1983) showed that if $\{r_n\}$ is a nondecreasing intermediate rank sequence, and if there are normalizing constants $\alpha_n > 0$ and $\beta_n$ such that

$\tilde\Phi^{(0,1)}_{n-r_n+1:n}(\alpha_n x + \beta_n) = I_{F(\alpha_n x + \beta_n)}(n - r_n + 1, r_n) \overset{w}{\longrightarrow}_n \tilde\Phi^{(0,1)}(x)$,   (2.15)

where $\tilde\Phi^{(0,1)}(x)$ is some nondegenerate df, then $\tilde\Phi^{(0,1)}(x)$ must be one and only one of the types $N(V_i(x))$, $i = 1, 2, 3$, where

$V_1(x) = x$, $\forall x$;
$V_2(x) = -c\ln|x|$, $x \le 0$, and $V_2(x) = \infty$, $x > 0$;
$V_3(x) = -\infty$, $x \le 0$, and $V_3(x) = c\ln x$, $x > 0$,

where $c$ is some positive constant. In this case, $F$ belongs to the domain of attraction of the df $\tilde\Phi^{(0,1)}(x)$, written $F \in D(\tilde\Phi^{(0,1)}(x))$. Moreover, (2.15) is satisfied with $\tilde\Phi^{(0,1)}(x) = N(V_i(x))$, for some $i \in \{1, 2, 3\}$ if, and only if,

$\dfrac{r_n - n\bar F(\alpha_n x + \beta_n)}{\sqrt{r_n}} \longrightarrow V_i(x)$ as $n \to \infty$.   (2.16)
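The classical intermediate-rank condition (2.16) can be illustrated in the same spirit (a sketch only, with an assumed standard exponential $F$): taking $r_n = \lceil n^{\theta}\rceil$, $\beta_n = \log(n/r_n)$ and $\alpha_n = 1/\sqrt{r_n}$ leads to the type $V_1(x) = x$, and the exact df approaches $N(V_1(x))$.

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import norm

theta, x = 0.4, 0.9
for n in (10**3, 10**5, 10**7):
    r_n = int(np.ceil(n ** theta))
    alpha_n, beta_n = 1.0 / np.sqrt(r_n), np.log(n / r_n)
    tail = np.exp(-(alpha_n * x + beta_n))                       # bar F(alpha_n x + beta_n)
    lhs = (r_n - n * tail) / np.sqrt(r_n)                        # condition (2.16)
    exact = betainc(n - r_n + 1, r_n, 1.0 - tail)                # Phi^{(0,1)}_{n-r_n+1:n}
    print(n, lhs, exact)
print("limits:", x, norm.cdf(x))                                 # V_1(x) and N(V_1(x))
```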

The following lemma is an essential tool in studying the limit df's of the intermediate gos.

Lemma 2.2. Let $m_1 = m_2 = \cdots = m_{n-1} = m > -1$, and let $r_n$ be a nondecreasing intermediate rank sequence. Then, there exist normalizing constants $a_n > 0$ and $b_n$ such that

$\Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n \Phi^{(\tilde m,k)}(x)$,   (2.17)

where $\Phi^{(\tilde m,k)}(x)$ is a nondegenerate df, if, and only if,

$\dfrac{r_N - n\bar F^{(m+1)}(a_n x + b_n)}{\sqrt{r_N}} \longrightarrow V(x)$ as $n \to \infty$,

where $\Phi^{(\tilde m,k)}(x) = N(V(x))$.

Proof. Since $r_n$ is a nondecreasing intermediate rank sequence and $R_n = k/(m+1) + r_n - 1$, clearly $R_N$ is also a nondecreasing intermediate rank sequence. By Lemma 1 in Wu (1966) (see also Lemma 2, p. 93, in Smirnov, 1952), it can be shown that

$D_N(a_n x + b_n) = \Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + b_n) - N\Big(\dfrac{R_N - N\bar G_m(a_n x + b_n)}{\sqrt{R_N}}\Big) = I_{G_m(a_n x + b_n)}(N - R_N + 1, R_N) - N\Big(\dfrac{R_N - N\bar G_m(a_n x + b_n)}{\sqrt{R_N}}\Big)$

converges uniformly to zero. Thus, (2.17) is satisfied if, and only if,

$\dfrac{R_N - N\bar G_m(a_n x + b_n)}{\sqrt{R_N}} \longrightarrow V(x)$ as $n \to \infty$,

where $N(V(x)) = \Phi^{(\tilde m,k)}(x)$. On the other hand, since $N \sim n$, $R_N \sim r_N$, as $n \to \infty$, $\bar G_m = \bar F^{(m+1)}$, and for all $x$ for which $\Phi^{(\tilde m,k)}(x) > 0$ we have $\bar F^{(m+1)}(a_n x + b_n) \sim R_N/N \sim r_N/n$, as $n \to \infty$, the claimed result immediately follows.

Theorem 2.3. Let $m_1 = m_2 = \cdots = m_{n-1} = m > -1$, and let $r_n$ be a nondecreasing intermediate rank sequence. Furthermore, let $r'_n$ be a variable rank sequence defined by

$r'_n = r_{S^{-1}(N)}$,   (2.18)

where $S(n) = r_N/(r_N/n)^{1/(m+1)}$. Then, there exist normalizing constants $a_n > 0$ and $b_n$ for which (2.17) is satisfied for some nondegenerate df $\Phi^{(\tilde m,k)}(x)$ if, and only if, there are normalizing constants $\alpha_n > 0$ and $\beta_n$ for which

$\tilde\Phi^{(0,1)}_{n-r'_n+1:n}(\alpha_n x + \beta_n) \overset{w}{\longrightarrow}_n \tilde\Phi^{(0,1)}(x)$,   (2.19)

where $\tilde\Phi^{(0,1)}(x)$ is some nondegenerate df, or equivalently $(r'_n - n\bar F(\alpha_n x + \beta_n))/\sqrt{r'_n} \longrightarrow V_i(x)$, as $n \to \infty$, $i \in \{1, 2, 3\}$, and $\tilde\Phi^{(0,1)}(x) = N(V_i(x))$. In this case $a_n$ and $b_n$ may be chosen such that $a_n = \alpha_{S(n)}$ and $b_n = \beta_{S(n)}$. Moreover, $\Phi^{(\tilde m,k)}(x)$ must have the form $N((m+1)V_i(x))$, i.e., $V(x) = (m+1)V_i(x)$.

Remark 2.3. In the particular case $m = 0$, $k = 1$, i.e., the case of oos, we have $S(n) = n$. Thus, $r'_n = r_n$, $\alpha_n = a_n$ and $\beta_n = b_n$.

Example 2.2. Let the intermediate rank sequence $r_n$ be such that $r_n \sim \lambda^2 n^{\theta}$, as $n \to \infty$, where $0 < \theta < 1$ and $0 < \lambda < +\infty$. Then, $S(n) = \lambda^{2m/(m+1)} n^{(1+m\theta)/(m+1)}(1 + o(1))$ and $S^{-1}(n) = \lambda^{-2m/(1+m\theta)} n^{(1+m)/(1+m\theta)}(1 + o(1))$. Clearly, the function $S(n)$ and its inverse $S^{-1}(n)$ converge to $+\infty$, as $n \to \infty$. Moreover, it can be shown that $r'_n = \lambda^{2/(1+m\theta)} n^{\theta(m+1)/(m\theta+1)}(1 + o(1))$, which is obviously a nondecreasing intermediate rank sequence.

Proof of Theorem 2.3. First, we note that $r'_{S(n)}/S(n) = (r_N/n)^{1/(m+1)} = ((N/n)(r_N/N))^{1/(m+1)} \longrightarrow 0$, as $n \to \infty$. Now, let (2.19) be satisfied with (2.18). Then, in view of the result of Wu (1966), there are normalizing constants $\alpha_n > 0$ and $\beta_n$ for which $(r'_n - n\bar F(\alpha_n x + \beta_n))/\sqrt{r'_n} \longrightarrow V_i(x)$, as $n \to \infty$, $i \in \{1, 2, 3\}$, where $\tilde\Phi^{(0,1)}(x) = N(V_i(x))$. On the other hand, since $S(n) \to \infty$, as $n \to \infty$, we get, in view of (2.16),

$\dfrac{r'_{S(n)} - S(n)\bar F(\alpha_{S(n)} x + \beta_{S(n)})}{\sqrt{r'_{S(n)}}} \longrightarrow V_i(x)$ as $n \to \infty$.

This implies

$\bar G_m(\alpha_{S(n)} x + \beta_{S(n)}) = \bar F^{(m+1)}(\alpha_{S(n)} x + \beta_{S(n)}) = \Big(\dfrac{r'_{S(n)}}{S(n)}\Big)^{m+1}\Big(1 - \dfrac{V_i(x)}{\sqrt{r'_{S(n)}}}(1 + o(1))\Big)^{m+1} = \Big(\dfrac{r'_{S(n)}}{S(n)}\Big)^{m+1}\Big(1 - \dfrac{(m+1)V_i(x)}{\sqrt{r'_{S(n)}}}(1 + o(1))\Big)$.

Therefore, we get

$\dfrac{r_N - n\bar F^{(m+1)}(a_n x + b_n)}{\sqrt{r_N}} = A_n + B_n(m+1)V_i(x)(1 + o(1))$,

where

$A_n = \dfrac{r_N - n\big(r'_{S(n)}/S(n)\big)^{m+1}}{\sqrt{r_N}} = \dfrac{r_N - n\,(r_N/n)}{\sqrt{r_N}} = 0$

and

$B_n = \dfrac{n\,r'^{\,m+1/2}_{S(n)}}{S^{m+1}(n)\sqrt{r_N}} = \dfrac{n}{r_N}\Big(\dfrac{r'_{S(n)}}{S(n)}\Big)^{m+1}\sqrt{\dfrac{r_N}{r'_{S(n)}}} = \dfrac{n}{r_N}\times\dfrac{r_N}{n} = 1$.

Thus, we get $(r_N - n\bar F^{(m+1)}(a_n x + b_n))/\sqrt{r_N} \to (m+1)V_i(x)$, as $n \to \infty$. Therefore, an application of Lemma 2.2 yields the first claim of the theorem. Let us now turn to the converse, i.e., let (2.17) be satisfied for some normalizing constants $a_n > 0$ and $b_n$. Then, in view of Lemma 2.2, we get $(r_N - n\bar F^{(m+1)}(a_n x + b_n))/\sqrt{r_N} \to V(x)$, as $n \to \infty$. This yields $\bar F(a_n x + b_n) = (r_N/n)^{1/(m+1)}(1 - (1/((m+1)\sqrt{r_N}))V(x)(1 + o(1)))$. Consequently, we get

$\dfrac{r'_{S(n)} - S(n)\bar F(a_n x + b_n)}{\sqrt{r'_{S(n)}}} = \dfrac{r'_{S(n)} - S(n)(r_N/n)^{1/(m+1)}}{\sqrt{r'_{S(n)}}} + \dfrac{S(n)(r_N/n)^{1/(m+1)}}{\sqrt{r'_{S(n)}}}\,\dfrac{V(x)}{(m+1)\sqrt{r_N}}(1 + o(1)) = \dfrac{r'_{S(n)} - r_N}{\sqrt{r'_{S(n)}}} + \sqrt{\dfrac{r_N^2}{r'_{S(n)} r_N}}\,\dfrac{V(x)}{m+1}(1 + o(1)) = \dfrac{V(x)}{m+1}(1 + o(1))$.

Therefore,

$\dfrac{r'_n - n\bar F(\alpha_n x + \beta_n)}{\sqrt{r'_n}} \longrightarrow \dfrac{V(x)}{m+1}$ as $n \to \infty$,

where $\alpha_n = a_{S^{-1}(n)}$ and $\beta_n = b_{S^{-1}(n)}$. This completes the proof.

Corollary 2.7. Let $m_1 = m_2 = \cdots = m_{n-r_n} = m$ and let condition C1 of Lemma 1.1 be satisfied, such that $\check M_n > -1$. Furthermore, let $r'_n$ be a variable rank sequence defined by $r'_n = r_{S'^{-1}(N')}$, where $S'(n) = r_{N'}/(r_{N'}/n)^{1/(m+1)}$ and $N' = n + k/(m+1) + ((\check M_n - m)r_n - (\check M_n + 1))/(m+1)$. Then, there exist normalizing constants $a_n > 0$ and $b_n$ for which (2.17) is satisfied for some nondegenerate df $\Phi^{(\tilde m,k)}(x)$ if, and only if, there are normalizing constants $\alpha_n > 0$ and $\beta_n$ for which $\tilde\Phi^{(0,1)}_{n-r'_n+1:n}(\alpha_n x + \beta_n) \overset{w}{\longrightarrow}_n N(V_i(x))$, $i \in \{1, 2, 3\}$. In this case $a_n$ and $b_n$ may be chosen such that $a_n = \alpha_{S'(n)}$ and $b_n = \beta_{S'(n)}$. Moreover, $\Phi^{(\tilde m,k)}(x)$ must have the form $N((m+1)V_i(x))$.

Proof. The proof follows as in the proof of Corollaries 2.2 and 2.5.



Corollary 2.8. Let $m_1 = m_2 = \cdots = m_{n-r_n} = m > -1$, and let $\check M = -1$. Then we get the following interesting result:

$\Phi^{(\tilde m,k)}_{n-r_n+1:n}(a_n x + b_n) \overset{w}{=}_n \Phi^{(\tilde m,k)}_{1:n}(a_n x + b_n) \overset{w}{\longrightarrow}_n 1 - \Gamma_{k/(m+1)}(U^{(m+1)}_{i,\alpha}(x))$,

where $a_n = \alpha_{(n)}$ and $b_n = \beta_{(n)}$. Moreover, $\alpha_n$ and $\beta_n$ are the norming constants defined in (2.5).

Acknowledgements

The author would like to thank the Associate Editor as well as the anonymous referees for constructive suggestions leading to an improvement of the presentation of the paper. The author would also like to thank Professor Cramer for providing his extensive work, Cramer (2003).

References

Arnold, B.C., Balakrishnan, N., Nagaraja, H.N., 1992. A First Course in Order Statistics. Wiley, New York.
Balakrishnan, N., Aggarwala, R., 2000. Progressive Censoring: Theory, Methods, and Applications. Birkhäuser, Boston.
Barakat, H.M., 1997. Asymptotic properties of bivariate random extremes. J. Statist. Plann. Inference 61, 203–217.
Christoph, G., Falk, M., 1996. A note on domain of attraction of p-max stable laws. Statist. Probab. Lett. 28, 279–284.
Cramer, E., 2003. Contributions to generalized order statistics. Habilitationsschrift, Reprint, University of Oldenburg.
de Haan, L., 1970. On Regular Variation and its Application to the Weak Convergence of Sample Extremes. Math. Centre Tracts 32, Amsterdam.
Kamps, U., 1995. A Concept of Generalized Order Statistics. Teubner, Stuttgart.
Leadbetter, M.R., Lindgren, G., Rootzén, H., 1983. Extremes and Related Properties of Random Sequences and Processes. Springer, New York.
Nasri-Roudsari, D., 1996. Extreme value theory of generalized order statistics. J. Statist. Plann. Inference 55, 281–297.
Nasri-Roudsari, D., 1999. Limit distributions of generalized order statistics under power normalization. Comm. Statist. Theory Methods 28, 1379–1389.
Nasri-Roudsari, D., Cramer, E., 1999. On the convergence rates of extreme generalized order statistics. Extremes 2, 421–447.
Smirnov, N.V., 1952. Limit distributions for terms of a variational series. Amer. Math. Soc. Transl. Ser. 1, 11, 82–143.
Wu, C.Y., 1966. The types of limit distributions for terms of variational series. Sci. Sinica 15, 749–762.
Xie, S., 1997. Maxima with random indexes. Chinese Sci. Bull. 42 (21), 1767–1771.