Estimation of parameters of the half-logistic distribution using generalized ranked set sampling


Computational Statistics & Data Analysis 33 (2000) 1–13 www.elsevier.com/locate/csda

A. Adatia
Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, Canada S4S 0A2
Received 1 June 1998; received in revised form 1 April 1999

Abstract

The ranked-set sampling technique has been generalized so that more efficient estimators may be obtained. The generalized ranked-set sampling technique is applied in the estimation of location and scale parameters of the half-logistic distribution. Three estimators are proposed. These are minimum variance unbiased estimators, simple estimators and ranked-set sample estimators. Coefficients, variances, covariances, generalized variances and relative efficiencies are tabulated. The estimators are compared to the best linear unbiased estimators of the location and scale parameters. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Half-logistic distribution; Order statistics; Linear estimation; Generalized ranked-set sampling; Ranked-set sampling

0167-9473/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0167-9473(99)00035-3

1. Introduction

In applied statistics, experimenters often encounter situations where the actual measurements of the sample observations cannot be obtained directly due to cost, time and other factors, but the observations can be ranked relatively easily. In such situations, McIntyre (1952) advocated the use of ranked-set sampling. He applied the ranked-set sampling technique in assessing the yields of pasture plots without actually carrying out the time-consuming process of mowing and weighing the hay for a large number of plots. Halls and Dell (1966) applied the ranked-set sampling procedure in estimating forage yields. Stokes and Sager (1988) discussed the application of this procedure to estimating volumes of trees in a forest. Takahasi and Wakimoto (1968) and Dell


and Clutter (1972) studied theoretical aspects of this technique on the assumption of perfect judgment ranking and imperfect judgment ranking, respectively. Dell and Clutter (1972) showed that the variance of the ranked-set sample mean is never larger than the variance of the random sample mean, whether or not judgment ranking is perfect. Patil et al. (1993) studied the same technique when the sample was from a finite population. Bohn (1996) discussed the application of ranked-set sampling in nonparametric procedures.

In this paper the ranked-set sampling technique has been generalized so that more efficient estimators may be obtained. In generalized ranked-set sampling, first a set of N elements is randomly selected from a given population. The sample is ordered without making actual measurements. The unit identified with rank N1 is accurately measured. Next, a second set of N elements is randomly selected from the population. Again the units are ordered and the unit with rank N2 is accurately measured. The process is continued until the Nth set of N observations is selected. The units are again ordered and the unit with rank NN is accurately measured. The ordered sample of the N sets can be represented as follows:

Set 1:  X(11)  X(12)  ...  X(1N)
Set 2:  X(21)  X(22)  ...  X(2N)
  .       .      .           .
Set N:  X(N1)  X(N2)  ...  X(NN)

The generalized ranked-set sample of size N consists of the units which are accurately measured, i.e. (X(1N1), X(2N2), ..., X(NNN)), where 1 ≤ Ni ≤ N and i = 1, 2, ..., N. The generalized ranked-set sample includes the usual ranked-set sample, which is obtained when N1 = 1, N2 = 2, ..., NN = N.

The half-logistic distribution was introduced by Balakrishnan (1985) as a life-testing model with increasing hazard rate. Later, Balakrishnan and Puthenpura (1986) considered data representing failure times for a specific type of electrical insulation subjected to continuously increasing voltage stress. They demonstrated that for such applications, the half-logistic distribution was an appropriate failure-time model. In this paper the half-logistic distribution has been considered due to its possible applications in applied statistics. Let the random variable X have a half-logistic distribution with probability density function f(x) and cumulative distribution function F(x):

f(x) = (2/σ) exp{−(x − μ)/σ} / [1 + exp{−(x − μ)/σ}]²,

F(x) = [1 − exp{−(x − μ)/σ}] / [1 + exp{−(x − μ)/σ}],    x ≥ μ, σ > 0,

where μ and σ are the location and the scale parameters.

The coefficients of the best linear unbiased estimators (BLUEs) for the location and scale parameters based on complete and censored samples have been tabulated by Balakrishnan and Puthenpura (1986) and Balakrishnan and Wong (1994). The approximate maximum likelihood estimators for the location and scale parameters are discussed by Balakrishnan and Wong (1991). The approximate BLUEs of the parameters are discussed by Adatia (1997).

The generalized ranked-set sampling technique is applied in the estimation of the location and scale parameters of the half-logistic distribution. Three estimators are proposed. These are minimum variance unbiased estimators (MVUE), simple estimators (SE) and ranked-set sample estimators (RSS). Coefficients, variances, covariances, generalized variances and relative efficiencies are derived. The estimators are compared to the best linear unbiased estimators (BLUE) of the parameters.
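The distribution function above has a closed-form inverse, F⁻¹(u) = μ + σ log((1 + u)/(1 − u)), which makes simulation by inverse transform straightforward (equivalently, X − μ is σ times the absolute value of a standard logistic variate). A minimal sketch, not part of the original paper; function names are illustrative:

```python
import math
import random

def halflogistic_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = (2/sigma) e^{-z} / (1 + e^{-z})^2 with z = (x - mu)/sigma, x >= mu
    if x < mu:
        return 0.0
    e = math.exp(-(x - mu) / sigma)
    return 2.0 * e / (sigma * (1.0 + e) ** 2)

def halflogistic_cdf(x, mu=0.0, sigma=1.0):
    # F(x) = (1 - e^{-z}) / (1 + e^{-z}), x >= mu
    if x < mu:
        return 0.0
    e = math.exp(-(x - mu) / sigma)
    return (1.0 - e) / (1.0 + e)

def halflogistic_sample(mu=0.0, sigma=1.0, rng=random):
    # Inverse transform: solving u = F(x) gives x = mu + sigma*log((1+u)/(1-u))
    u = rng.random()
    return mu + sigma * math.log((1.0 + u) / (1.0 - u))
```

A quick consistency check is that the numerical derivative of `halflogistic_cdf` matches `halflogistic_pdf` and that every sampled value lies in the support x ≥ μ.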

2. Estimators

2.1. Estimators based on generalized ranked-set sampling

Let

Z(iNi) = (X(iNi) − μ)/σ,  α(iNi) ≡ E(Z(iNi)),  ω(iNi) ≡ Var(Z(iNi)),  i = 1, 2, ..., N.

Therefore E(X(iNi)) ≡ μ + σ α(iNi) and Var(X(iNi)) ≡ ω(iNi) σ². Let αS = (α(1N1), α(2N2), ..., α(NNN))ᵀ, where T implies the transpose, 1ᵀ = (1, ..., 1), S = {N1, N2, ..., NN}, XS = (X(1N1), X(2N2), ..., X(NNN))ᵀ and Var(XS) = ΩS σ², where ΩS is an N × N diagonal matrix with ω(iNi) as the (i, i)th element. Balakrishnan (1985) has tabulated the values of α(iNi) and ω(iNi) for the half-logistic distribution. Then

E(XS) = 1μ + αS σ = AS θ,

where

ASᵀ = | 1        1        ...  1       |
      | α(1N1)   α(2N2)   ...  α(NNN)  |

and θᵀ = (μ, σ).

The least-squares estimator of θ is obtained by applying the Gauss–Markov theorem (Balakrishnan and Cohen, 1991). Then

θ̂S = (ASᵀ ΩS⁻¹ AS)⁻¹ ASᵀ ΩS⁻¹ XS  and  Var(θ̂S) = (ASᵀ ΩS⁻¹ AS)⁻¹ σ²,

where superscript −1 implies inverse.
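Because ΩS is diagonal, the Gauss–Markov estimator reduces to sums over the N measured units. The sketch below approximates the order-statistic means α and variances ω by Monte Carlo rather than by the exact values tabulated in Balakrishnan (1985), and then applies the estimator; all function names are illustrative, not from the paper:

```python
import math
import random

def order_stat_moments(N, reps=20000, seed=1):
    """Monte Carlo approximation of alpha(r:N) = E[Z(r)] and
    omega(r:N) = Var[Z(r)] for the standard half-logistic parent
    (a stand-in for the exact tables of Balakrishnan, 1985)."""
    rng = random.Random(seed)
    sums = [0.0] * N
    sqs = [0.0] * N
    for _ in range(reps):
        # One set of N standard half-logistic draws, ordered
        z = sorted(math.log((1 + rng.random()) / (1 - rng.random())) for _ in range(N))
        for r in range(N):
            sums[r] += z[r]
            sqs[r] += z[r] * z[r]
    alpha = [s / reps for s in sums]
    omega = [q / reps - a * a for q, a in zip(sqs, alpha)]
    return alpha, omega

def gls_estimates(xs, ranks, alpha, omega):
    """Gauss-Markov estimator of theta = (mu, sigma) from a generalized
    ranked-set sample xs with 1-based ranks S = (N1, ..., NN)."""
    a = [alpha[r - 1] for r in ranks]
    w = [omega[r - 1] for r in ranks]
    T1 = sum(ai * ai / wi for ai, wi in zip(a, w))
    T2 = sum(1.0 / wi for wi in w)
    T3 = sum(ai / wi for ai, wi in zip(a, w))
    Sx = sum(x / wi for x, wi in zip(xs, w))
    Sax = sum(ai * x / wi for ai, x, wi in zip(a, xs, w))
    d = T1 * T2 - T3 * T3
    return (T1 * Sx - T3 * Sax) / d, (T2 * Sax - T3 * Sx) / d
```

One algebraic check: if each measurement equals its mean, xs[i] = μ + σ α(Ni), the estimator returns (μ, σ) exactly, for any positive ω — a direct consequence of unbiasedness.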


Therefore, based on the generalized ranked-set sample XS with S = {N1, N2, ..., NN}, the estimators are

μ̂S = Σ_{i=1}^{N} Ai(N)S X(iNi) = Σ_{i=1}^{N} [(T1S − α(iNi) T3S)/(T1S T2S − T3S²)] (X(iNi)/ω(iNi)),

σ̂S = Σ_{i=1}^{N} Ci(N)S X(iNi) = Σ_{i=1}^{N} [(α(iNi) T2S − T3S)/(T1S T2S − T3S²)] (X(iNi)/ω(iNi)),

where Ai(N)S represents the ith coefficient for estimating the location parameter based on a sample size of N and ranks S, Ci(N)S represents the ith coefficient for estimating the scale parameter based on a sample size of N and ranks S, and

T1S = αSᵀ ΩS⁻¹ αS = Σ_{i=1}^{N} α(iNi)²/ω(iNi),

T2S = 1ᵀ ΩS⁻¹ 1 = Σ_{i=1}^{N} 1/ω(iNi),

T3S = 1ᵀ ΩS⁻¹ αS = Σ_{i=1}^{N} α(iNi)/ω(iNi).

The variances and covariances of these estimators are given by

Var(μ̂S) = T1S σ²/(T1S T2S − T3S²),

Var(σ̂S) = T2S σ²/(T1S T2S − T3S²),

Cov(μ̂S, σ̂S) = −T3S σ²/(T1S T2S − T3S²),

and the generalized variance is

GVar(μ̂S, σ̂S) = Var(μ̂S) Var(σ̂S) − (Cov(μ̂S, σ̂S))².

2.2. Minimum variance unbiased estimators (MVUE)

Minimum variance unbiased estimators are obtained from the generalized ranked-set estimators when all possible choices of S are considered. The best choice of S is the one which gives the minimum generalized variance of the estimators. This S is denoted by SMVUE, and the estimators are denoted by μ̂MVUE and σ̂MVUE.

Table 1 provides ranks and coefficients for computing μ̂MVUE and σ̂MVUE for N = 2, 3, ..., 11. The ranks are obtained by computing the generalized variance for each choice of S; the ranks which result in the minimum generalized variance of the estimators are identified as SMVUE. Larger sample sizes (N > 11) have not been considered due to the considerable amount of computer time required. Table 2 provides the variances, covariances, generalized variances and relative efficiencies of the estimators for N = 2, 3, ..., 11. The relative efficiencies are relative to the BLUE of μ and σ. The BLUE is based on one ordered sample of size N, measuring all N observations.

Table 1
Coefficients Ai(N)MVUE and Ci(N)MVUE for computing μ̂MVUE and σ̂MVUE

N = 2, SMVUE = {1, 2}:
  Ai(N)MVUE: 1.62945, −0.62945
  Ci(N)MVUE: −0.81472, 0.81472
N = 3, SMVUE = {1, 1, 3}:
  Ai(N)MVUE: 0.64806, 0.64806, −0.29611
  Ci(N)MVUE: −0.27157, −0.27157, 0.54315
N = 4, SMVUE = {1, 1, 4, 4}:
  Ai(N)MVUE: 0.59445, 0.59445, −0.09445, −0.09445
  Ci(N)MVUE: −0.22292, −0.22292, 0.22292, 0.22292
N = 5, SMVUE = {1, 1, 1, 5, 5}:
  Ai(N)MVUE: 0.37894, 0.37894, 0.37894, −0.06841, −0.06841
  Ci(N)MVUE: −0.13129, −0.13129, −0.13129, 0.19693, 0.19693
N = 6, SMVUE = {1, 1, 1, 6, 6, 6}:
  Ai(N)MVUE: 0.36878, 0.36878, 0.36878, −0.03545, −0.03545, −0.03545
  Ci(N)MVUE: −0.12025, −0.12025, −0.12025, 0.12025, 0.12025, 0.12025
N = 7, SMVUE = {1, 1, 1, 1, 6, 6, 6}:
  Ai(N)MVUE: 0.28384, 0.28384, 0.28384, 0.28384, −0.04512, −0.04512, −0.04512
  Ci(N)MVUE: −0.13208, −0.13208, −0.13208, −0.13208, 0.17611, 0.17611, 0.17611
N = 8, SMVUE = {1, 1, 1, 1, 7, 7, 7, 7}:
  Ai(N)MVUE: 0.27745, 0.27745, 0.27745, 0.27745, −0.02745, −0.02745, −0.02745, −0.02745
  Ci(N)MVUE: −0.12114, −0.12114, −0.12114, −0.12114, 0.12114, 0.12114, 0.12114, 0.12114
N = 9, SMVUE = {1, 1, 1, 1, 1, 8, 8, 8, 8}:
  Ai(N)MVUE: 0.21839, 0.21839, 0.21839, 0.21839, 0.21839, −0.02298, −0.02298, −0.02298, −0.02298
  Ci(N)MVUE: −0.09045, −0.09045, −0.09045, −0.09045, −0.09045, 0.11306, 0.11306, 0.11306, 0.11306
N = 10, SMVUE = {1, 1, 1, 1, 1, 9, 9, 9, 9, 9}:
  Ai(N)MVUE: 0.21575, 0.21575, 0.21575, 0.21575, 0.21575, −0.01575, −0.01575, −0.01575, −0.01575, −0.01575
  Ci(N)MVUE: −0.08545, −0.08545, −0.08545, −0.08545, −0.08545, 0.08545, 0.08545, 0.08545, 0.08545, 0.08545
N = 11, SMVUE = {1, 1, 1, 1, 1, 9, 9, 9, 9, 10, 10}:
  Ai(N)MVUE: 0.21617, 0.21617, 0.21617, 0.21617, 0.21617, −0.01369, −0.01369, −0.01369, −0.01369, −0.01306, −0.01306
  Ci(N)MVUE: −0.09693, −0.09693, −0.09693, −0.09693, −0.09693, 0.08688, 0.08688, 0.08688, 0.08688, 0.06856, 0.06856

Table 2
Variances, covariances, generalized variances and relative efficiencies for minimum variance unbiased estimators

N   Var(μ̂MVUE)/σ²  Var(σ̂MVUE)/σ²  Cov(μ̂MVUE,σ̂MVUE)/σ²  GVar(μ̂MVUE,σ̂MVUE)/σ⁴
2   1.77424        1.31616        −1.37343              0.44889
3   0.33033        0.50412        −0.33641              0.05336
4   0.12881        0.17464        −0.10553              0.01136
5   0.05726        0.13116        −0.05837              0.00410
6   0.03542        0.07389        −0.03041              0.00169
7   0.02160        0.06227        −0.02324              0.00081
8   0.01538        0.03967        −0.01429              0.00041
9   0.00984        0.03392        −0.01012              0.00023
10  0.00768        0.02433        −0.00701              0.00014
11  0.00635        0.01881        −0.00559              0.00009

N   Var(μ̂BLUE)/Var(μ̂MVUE)  Var(σ̂BLUE)/Var(σ̂MVUE)  GVar(BLUE)/GVar(MVUE)  Var(μ̂SE)/Var(μ̂MVUE)  Var(σ̂SE)/Var(σ̂MVUE)  GVar(SE)/GVar(MVUE)  Var(μ̂RSS)/Var(μ̂MVUE)  Var(σ̂RSS)/Var(σ̂MVUE)  GVar(RSS)/GVar(MVUE)
2   0.56456   0.62011   0.79025    1.00000  1.00000  1.00000   1.00000  1.00000  1.00000
3   1.14325   0.78705   1.66652    1.00000  1.00000  1.00000   1.57615  0.92917  1.28008
4   1.59504   1.48658   3.18874    1.00000  1.00000  1.00000   1.84251  1.38470  1.68252
5   2.28972   1.46466   4.50175    1.00000  1.00000  1.00000   2.31322  1.12462  1.76021
6   2.58676   2.05841   6.34833    1.00000  1.00000  1.00000   2.34944  1.34184  1.93570
7   3.14532   2.01875   8.47525    1.00000  1.00000  1.00000   2.61762  1.14146  2.09039
8   3.41454   2.69827   11.36668   1.00000  1.00000  1.00000   2.63998  1.34522  2.33103
9   4.25833   2.74600   14.13515   1.00000  1.00000  1.00000   3.09068  1.22284  2.46232
10  4.45861   3.38695   17.46763   1.00000  1.00000  1.00000   3.06248  1.36148  2.62925
11  4.49503   3.92771   20.67373   0.87244  1.18554  1.00000   2.94078  1.43771  2.72668

2.3. Simple estimators (SE)

The simple estimators are obtained from the generalized ranked-set estimators as follows. When the sample size is even, S = {N1, N2, ..., NN} with N1 = N2 = ... = NN/2 = k1 and NN/2+1 = ... = NN = k2, so that S = {k1, ..., k1, k2, ..., k2}. Then, from the generalized ranked-set estimators, the simple estimators for even sample sizes are

μ̂SE = Σ_{i=1}^{N/2} Ai(N)SE X(ik1) + Σ_{j=N/2+1}^{N} Aj(N)SE X(jk2),

σ̂SE = Σ_{i=1}^{N/2} Ci(N)SE X(ik1) + Σ_{j=N/2+1}^{N} Cj(N)SE X(jk2),

where Ai(N)SE and Aj(N)SE represent the ith and the jth coefficients for estimating the location parameter based on a sample size of N and ranks SE, and Ci(N)SE and Cj(N)SE represent the ith and the jth coefficients for estimating the scale parameter based on a sample size of N and ranks SE:

Ai(N)SE = 2α(ik2)/(N(α(ik2) − α(ik1))),    i = 1, ..., N/2,

Aj(N)SE = −2α(jk1)/(N(α(jk2) − α(jk1))),   j = N/2 + 1, ..., N,

Ci(N)SE = −2/(N(α(ik2) − α(ik1))),         i = 1, ..., N/2,

Cj(N)SE = 2/(N(α(jk2) − α(jk1))),          j = N/2 + 1, ..., N,

SE = {k1, ..., k1, k2, ..., k2}.

The variances and covariances are

Var(μ̂SE) = σ² [ Σ_{i=1}^{N/2} 4α(ik2)² ω(ik1)/(N(α(ik2) − α(ik1)))² + Σ_{j=N/2+1}^{N} 4α(jk1)² ω(jk2)/(N(α(jk2) − α(jk1)))² ],

Var(σ̂SE) = σ² [ Σ_{i=1}^{N/2} 4ω(ik1)/(N(α(ik2) − α(ik1)))² + Σ_{j=N/2+1}^{N} 4ω(jk2)/(N(α(jk2) − α(jk1)))² ],

Cov(μ̂SE, σ̂SE) = −σ² [ Σ_{i=1}^{N/2} 4α(ik2) ω(ik1)/(N(α(ik2) − α(ik1)))² + Σ_{j=N/2+1}^{N} 4α(jk1) ω(jk2)/(N(α(jk2) − α(jk1)))² ],

GVar(μ̂SE, σ̂SE) = Var(μ̂SE) Var(σ̂SE) − (Cov(μ̂SE, σ̂SE))².
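For even N the four coefficients are constant within each half of the sample, so they can be generated directly from α(k1:N) and α(k2:N). A short sketch, not part of the paper (the α values used in the check are placeholders, not Balakrishnan's tabulated ones); the assertions verify the algebraic identities that make μ̂SE and σ̂SE unbiased:

```python
def se_coeffs_even(N, alpha_k1, alpha_k2):
    """Simple-estimator coefficients for even N, given the standardized
    order-statistic means alpha(k1:N) and alpha(k2:N)."""
    d = alpha_k2 - alpha_k1
    Ai = 2 * alpha_k2 / (N * d)    # location, i = 1, ..., N/2
    Aj = -2 * alpha_k1 / (N * d)   # location, j = N/2 + 1, ..., N
    Ci = -2 / (N * d)              # scale,    i = 1, ..., N/2
    Cj = 2 / (N * d)               # scale,    j = N/2 + 1, ..., N
    return Ai, Aj, Ci, Cj

# Placeholder alpha values, for illustration only
N, a1, a2 = 6, 0.41, 1.38
Ai, Aj, Ci, Cj = se_coeffs_even(N, a1, a2)
h = N // 2
# E(mu-hat) = mu: the location weights sum to 1 and cancel the alpha terms
assert abs(h * Ai + h * Aj - 1) < 1e-12
assert abs(h * Ai * a1 + h * Aj * a2) < 1e-12
# E(sigma-hat) = sigma: the scale weights sum to 0 and pick up one unit of alpha
assert abs(h * Ci + h * Cj) < 1e-12
assert abs(h * Ci * a1 + h * Cj * a2 - 1) < 1e-12
```

The same identities hold for any k1 < k2, which is why the estimators remain unbiased for every admissible rank pair; only the variances change.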


When the sample size is odd, S = {N1, N2, ..., NN} with N1 = N2 = ... = N(N+1)/2 = k1 and N(N+1)/2+1 = ... = NN = k2, so that S = {k1, ..., k1, k1, k2, ..., k2}. Then, from the generalized ranked-set estimators, the simple estimators for odd sample sizes are

μ̂SE = Σ_{i=1}^{(N−1)/2+1} Ai(N)SE X(ik1) + Σ_{j=(N−1)/2+2}^{N} Aj(N)SE X(jk2),

σ̂SE = Σ_{i=1}^{(N−1)/2+1} Ci(N)SE X(ik1) + Σ_{j=(N−1)/2+2}^{N} Cj(N)SE X(jk2),

where Ai(N)SE and Aj(N)SE represent the ith and the jth coefficients for estimating the location parameter based on a sample size of N and ranks SE, and Ci(N)SE and Cj(N)SE represent the ith and the jth coefficients for estimating the scale parameter based on a sample size of N and ranks SE:

Ai(N)SE = 2α(ik2)/((N + 1)(α(ik2) − α(ik1))),    i = 1, ..., (N − 1)/2 + 1,

Aj(N)SE = −2α(jk1)/((N − 1)(α(jk2) − α(jk1))),   j = (N − 1)/2 + 2, ..., N,

Ci(N)SE = −2/((N + 1)(α(ik2) − α(ik1))),         i = 1, ..., (N − 1)/2 + 1,

Cj(N)SE = 2/((N − 1)(α(jk2) − α(jk1))),          j = (N − 1)/2 + 2, ..., N,

SE = {k1, ..., k1, k1, k2, ..., k2}.

The variances and covariances are

Var(μ̂SE) = σ² [ Σ_{i=1}^{(N−1)/2+1} 4α(ik2)² ω(ik1)/((N + 1)(α(ik2) − α(ik1)))² + Σ_{j=(N−1)/2+2}^{N} 4α(jk1)² ω(jk2)/((N − 1)(α(jk2) − α(jk1)))² ],

Var(σ̂SE) = σ² [ Σ_{i=1}^{(N−1)/2+1} 4ω(ik1)/((N + 1)(α(ik2) − α(ik1)))² + Σ_{j=(N−1)/2+2}^{N} 4ω(jk2)/((N − 1)(α(jk2) − α(jk1)))² ],

Cov(μ̂SE, σ̂SE) = −σ² [ Σ_{i=1}^{(N−1)/2+1} 4α(ik2) ω(ik1)/((N + 1)(α(ik2) − α(ik1)))² + Σ_{j=(N−1)/2+2}^{N} 4α(jk1) ω(jk2)/((N − 1)(α(jk2) − α(jk1)))² ],

GVar(μ̂SE, σ̂SE) = Var(μ̂SE) Var(σ̂SE) − (Cov(μ̂SE, σ̂SE))².

Table 3
Ranks, coefficients, variances, covariances, generalized variances and relative efficiencies of simple estimators

N   k1  k2  Ai(N)SE   Aj(N)SE    Ci(N)SE    Cj(N)SE
2   1   2   1.62945   −0.62945   −0.81472   0.81472
3   1   3   0.64806   −0.29611   −0.27157   0.54315
4   1   4   0.59445   −0.09445   −0.22292   0.22292
5   1   5   0.37894   −0.06841   −0.13129   0.19693
6   1   6   0.36878   −0.03545   −0.12025   0.12025
7   1   6   0.28384   −0.04512   −0.13208   0.17611
8   1   7   0.27745   −0.02745   −0.12114   0.12114
9   1   8   0.21839   −0.02298   −0.09045   0.11306
10  1   9   0.21575   −0.01575   −0.08545   0.08545
11  1   9   0.18139   −0.01767   −0.08733   0.10480
12  1   10  0.17951   −0.01284   −0.08266   0.08266
13  1   11  0.15260   −0.01136   −0.06758   0.07884
14  1   12  0.15157   −0.00871   −0.06483   0.06484
15  1   13  0.13188   −0.00787   −0.05469   0.06250
16  1   13  0.13241   −0.00741   −0.06264   0.06265
17  1   14  0.11711   −0.00675   −0.05372   0.06043
18  1   15  0.11661   −0.00550   −0.05200   0.05200
19  1   16  0.10456   −0.00507   −0.04544   0.05049
20  1   17  0.10423   −0.00423   −0.04423   0.04423

N   Var(μ̂SE)/σ²  Var(σ̂SE)/σ²  Cov(μ̂SE,σ̂SE)/σ²  GVar(μ̂SE,σ̂SE)/σ⁴  Var(μ̂BLUE)/Var(μ̂SE)  Var(σ̂BLUE)/Var(σ̂SE)  GVar(BLUE)/GVar(SE)  Var(μ̂RSS)/Var(μ̂SE)  Var(σ̂RSS)/Var(σ̂SE)  GVar(RSS)/GVar(SE)
2   1.77424  1.31616  −1.37343  0.4488905  0.56456  0.62011  0.79025   1.00000  1.00000  1.00000
3   0.33033  0.50412  −0.33641  0.0533566  1.14325  0.78705  1.66652   1.57615  0.92917  1.28008
4   0.12881  0.17464  −0.10553  0.0113586  1.59504  1.48658  3.18874   1.84251  1.38470  1.68252
5   0.05726  0.13116  −0.05837  0.0041026  2.28972  1.46466  4.50175   2.31322  1.12462  1.76021
6   0.03542  0.07389  −0.03041  0.0016928  2.58676  2.05841  6.34833   2.34944  1.34184  1.93570
7   0.02160  0.06227  −0.02324  0.0008053  3.14532  2.01875  8.47525   2.61762  1.14146  2.09039
8   0.01538  0.03967  −0.01429  0.0004060  3.41454  2.69827  11.36668  2.63998  1.34522  2.33103
9   0.00984  0.03392  −0.01012  0.0002314  4.25833  2.74600  14.13515  3.09068  1.22284  2.46232
10  0.00768  0.02433  −0.00701  0.0001377  4.45861  3.38695  17.46763  3.06248  1.36148  2.62925
11  0.00554  0.02230  −0.00594  0.0000881  5.15330  3.31299  20.66103  3.37145  1.21270  2.72501
12  0.00452  0.01674  −0.00437  0.0000566  5.34320  3.99803  24.97323  3.34704  1.34248  2.91976
13  0.00332  0.01507  −0.00342  0.0000383  6.23923  4.05868  29.18648  3.75835  1.25807  3.05467
14  0.00281  0.01193  −0.00265  0.0000265  6.39095  4.71993  33.97965  3.71545  1.35794  3.21031
15  0.00216  0.01100  −0.00217  0.0000191  7.29727  4.73876  38.64637  4.10696  1.27145  3.31988
16  0.00191  0.00913  −0.00188  0.0000139  7.30458  5.32133  43.98828  3.99038  1.33709  3.45761
17  0.00151  0.00844  −0.00155  0.0000103  8.22694  5.38484  49.64329  4.37228  1.27185  3.59033
18  0.00133  0.00705  −0.00128  0.0000077  8.34918  6.05770  55.88704  4.32541  1.34938  3.73727
19  0.00108  0.00661  −0.00109  0.0000060  9.27764  6.08843  62.01987  4.69349  1.28290  3.85167
20  0.00097  0.00565  −0.00091  0.0000046  9.38247  6.74242  68.72167  4.64219  1.34753  3.97919

Note: When N is even, 1 ≤ i ≤ N/2 and N/2 + 1 ≤ j ≤ N. When N is odd, 1 ≤ i ≤ (N + 1)/2 and (N + 3)/2 ≤ j ≤ N.

Table 3 provides ranks, coefficients, variances, covariances, generalized variances and relative efficiencies of the estimators μ̂SE and σ̂SE for N = 2, 3, ..., 20. The ranks were obtained by computing the generalized variance for each choice of S of the form S = {k1, ..., k1, k2, ..., k2} over the different values of k1 and k2. The ranks which resulted in the minimum generalized variance of the estimators are identified as SSE. The relative efficiencies are relative to the BLUE of μ and σ. The BLUEs of μ and σ are based on one ordered sample of size N, measuring all N observations.

2.4. Ranked-set sample estimators (RSS)

The ranked-set sample estimators (RSS) of μ and σ are obtained from the generalized ranked-set estimators when S = {1, 2, ..., N}. This choice of ranks is denoted by SRSS, and the estimators are denoted by μ̂RSS and σ̂RSS, respectively. Table 4 provides coefficients for computing μ̂RSS and σ̂RSS for N = 2, 3, ..., 15. Table 5 provides the variances, covariances, generalized variances and relative efficiencies of the estimators for N = 2, 3, ..., 15. The relative efficiencies are relative to the BLUE of μ and σ. The BLUE is based on one ordered sample of size N, measuring all N observations.

3. Comparisons

In this section comparisons are made between the minimum variance unbiased estimators (MVUE), the simple estimators (SE), the ranked-set sample estimators (RSS) and the best linear unbiased estimators (BLUE).

When comparing generalized variances, Table 2 shows that the MVUE are more efficient than the BLUE and RSS for N ≥ 3, and that the MVUE and SE have the same efficiency for N ≤ 10. Table 3 shows that the SE are more efficient than the BLUE and RSS for N ≥ 3. Table 5 shows that the RSS are more efficient than the BLUE for N ≥ 3.

When comparing the variances of the estimators of the location parameter, Table 2 shows that μ̂MVUE is more efficient than μ̂BLUE and μ̂RSS for N ≥ 3, and that μ̂MVUE and μ̂SE have the same efficiency for N ≤ 10. Table 3 shows that μ̂SE is more efficient than μ̂BLUE and μ̂RSS for N ≥ 3. Table 5 shows that μ̂RSS is more efficient than μ̂BLUE for N ≥ 6.

When comparing the variances of the estimators of the scale parameter, Table 2 shows that σ̂MVUE is more efficient than σ̂BLUE and σ̂RSS for N ≥ 4, and that σ̂MVUE and σ̂SE have the same efficiency for N ≤ 10. Table 3 shows that σ̂SE is more efficient than σ̂BLUE and σ̂RSS for N ≥ 4. Table 5 shows that σ̂RSS is more efficient than σ̂BLUE for N ≥ 4.

4. Conclusion

The minimum variance unbiased estimators (MVUE), the simple estimators (SE) and the ranked-set sample estimators (RSS) are all more efficient than the best linear unbiased estimators (BLUE). The MVUE are the best estimators. However,

Table 4
Coefficients Ai(N)RSS and Ci(N)RSS for computing μ̂RSS and σ̂RSS

N = 2:
  Ai(N)RSS: 1.62945, −0.62945
  Ci(N)RSS: −0.81472, 0.81472
N = 3:
  Ai(N)RSS: 1.28885, 0.01154, −0.30039
  Ci(N)RSS: −0.72183, 0.28386, 0.43796
N = 4:
  Ai(N)RSS: 1.09163, 0.19434, −0.10995, −0.17603
  Ci(N)RSS: −0.65879, 0.07965, 0.30111, 0.27802
N = 5:
  Ai(N)RSS: 0.96219, 0.25719, 0.00418, −0.10802, −0.11555
  Ci(N)RSS: −0.61211, −0.01725, 0.18567, 0.25010, 0.19360
N = 6:
  Ai(N)RSS: 0.87015, 0.27943, 0.06573, −0.04244, −0.09130, −0.08157
  Ci(N)RSS: −0.57572, −0.06959, 0.10979, 0.18929, 0.20308, 0.14314
N = 7:
  Ai(N)RSS: 0.80101, 0.28560, 0.09997, 0.00220, −0.05284, −0.07535, −0.06059
  Ci(N)RSS: −0.54632, −0.10029, 0.05953, 0.13824, 0.17213, 0.16629, 0.11042
N = 8:
  Ai(N)RSS: 0.74692, 0.28485, 0.11967, 0.03164, −0.02166, −0.05232, −0.06236, −0.04674
  Ci(N)RSS: −0.52196, −0.11932, 0.02508, 0.09937, 0.13879, 0.15203, 0.13810, 0.08792
N = 9:
  Ai(N)RSS: 0.70331, 0.28101, 0.13121, 0.05125, 0.00142, −0.03052, −0.04845, −0.05212, −0.03712
  Ci(N)RSS: −0.50136, −0.13155, 0.00068, 0.07003, 0.11012, 0.13058, 0.13343, 0.11632, 0.07176
N = 10:
  Ai(N)RSS: 0.66730, 0.27584, 0.13796, 0.06455, 0.01826, −0.01281, −0.03308, −0.04379, −0.04404, −0.03018
  Ci(N)RSS: −0.48365, −0.13960, −0.01712, 0.04766, 0.08669, 0.10976, 0.11996, 0.11729, 0.09926, 0.05974
N = 11:
  Ai(N)RSS: 0.63697, 0.27018, 0.14179, 0.07371, 0.03060, 0.00103, −0.01953, −0.03287, −0.03924, −0.03763, −0.02500
  Ci(N)RSS: −0.46822, −0.14496, −0.03042, 0.03033, 0.06775, 0.09148, 0.10499, 0.10924, 0.10357, 0.08569, 0.05055
N = 12:
  Ai(N)RSS: 0.61102, 0.26445, 0.14379, 0.08008, 0.03974, 0.01177, −0.00831, −0.02248, −0.03144, −0.03511, −0.03248, −0.02104
  Ci(N)RSS: −0.45464, −0.14852, −0.04056, 0.01669, 0.05239, 0.07591, 0.09086, 0.09859, 0.09923, 0.09195, 0.07473, 0.04336
N = 13:
  Ai(N)RSS: 0.58853, 0.25885, 0.14460, 0.08455, 0.04659, 0.02013, 0.00081, −0.01341, −0.02343, −0.02953, −0.03145, −0.02829, −0.01794
  Ci(N)RSS: −0.44255, −0.15085, −0.04843, 0.00581, 0.03985, 0.06277, 0.07822, 0.08773, 0.09178, 0.09019, 0.08211, 0.06576, 0.03762
N = 14:
  Ai(N)RSS: 0.56881, 0.25349, 0.14466, 0.08769, 0.05175, 0.02667, 0.00818, −0.00574, −0.01611, −0.02333, −0.02749, −0.02826, −0.02484, −0.01548
  Ci(N)RSS: −0.43171, −0.15233, −0.05462, −0.00300, 0.02950, 0.05169, 0.06716, 0.07754, 0.08349, 0.08513, 0.08213, 0.07372, 0.05833, 0.03297
N = 15:
  Ai(N)RSS: 0.55135, 0.24841, 0.14420, 0.08986, 0.05569, 0.03183, 0.01416, 0.00066, −0.00968, −0.01740, −0.02267, −0.02547, −0.02548, −0.02198, −0.01349
  Ci(N)RSS: −0.42193, −0.15319, −0.05955, −0.01021, 0.02089, 0.04230, 0.05756, 0.06831, 0.07531, 0.07885, 0.07886, 0.07500, 0.06654, 0.05210, 0.02914


Table 5
Variances, covariances, generalized variances and relative efficiencies for ranked-set sampling estimators

N   Var(μ̂RSS)/σ²  Var(σ̂RSS)/σ²  Cov(μ̂RSS,σ̂RSS)/σ²  GVar(μ̂RSS,σ̂RSS)/σ⁴  Var(μ̂BLUE)/Var(μ̂RSS)  Var(σ̂BLUE)/Var(σ̂RSS)  GVar(BLUE)/GVar(RSS)
2   1.77424  1.31616  −1.37343  0.44889  0.56456  0.62011  0.79025
3   0.52065  0.46841  −0.41902  0.06830  0.72534  0.84705  1.30189
4   0.23733  0.24183  −0.19566  0.01911  0.86569  1.07357  1.89522
5   0.13246  0.14750  −0.11098  0.00722  0.98984  1.30237  2.55751
6   0.08323  0.09915  −0.07053  0.00328  1.10101  1.53403  3.27960
7   0.05655  0.07108  −0.04833  0.00168  1.20160  1.76856  4.05439
8   0.04062  0.05336  −0.03494  0.00095  1.29340  2.00582  4.87625
9   0.03041  0.04148  −0.02630  0.00057  1.37780  2.24559  5.74058
10  0.02352  0.03312  −0.02042  0.00036  1.45588  2.48769  6.64357
11  0.01867  0.02704  −0.01627  0.00024  1.52851  2.73191  7.58201
12  0.01513  0.02247  −0.01322  0.00017  1.59640  2.97810  8.55317
13  0.01248  0.01895  −0.01093  0.00012  1.66010  3.22611  9.55470
14  0.01046  0.01620  −0.00918  0.00009  1.72010  3.47580  10.58453
15  0.00887  0.01399  −0.00780  0.00006  1.77680  3.72706  11.64088

the coefficients for larger sample sizes (N > 11) require a considerable amount of computational time. The simple estimators (SE) are more efficient than the ranked-set sample estimators (RSS) as well as the best linear unbiased estimators (BLUE). For small sample sizes they are as efficient as the MVUE. Moreover, the coefficients, variances and covariances of the SE are easy to obtain. The simple estimators (SE) are thus the best estimators for estimating the parameters of the half-logistic distribution when the actual measurements of the sample observations are difficult to obtain but the observations are easily ranked.

References

Adatia, A., 1997. Approximate BLUEs of the parameters of the half logistic distribution based on fairly large doubly censored samples. Comput. Statist. Data Anal. 24, 179–191.
Balakrishnan, N., 1985. Order statistics from the half logistic distribution. J. Statist. Comput. Simulation 20, 287–309.
Balakrishnan, N., Cohen, A.C., 1991. Order Statistics and Inference: Estimation Methods. Academic Press, Boston.
Balakrishnan, N., Puthenpura, S., 1986. Best linear unbiased estimators of location and scale parameters of the half logistic distribution. J. Statist. Comput. Simulation 25, 193–204.
Balakrishnan, N., Wong, K.H.T., 1991. Approximate MLEs for the location and scale parameters of the half logistic distribution with Type-II right censoring. IEEE Trans. Reliab. 40, 140–145.
Balakrishnan, N., Wong, K.H.T., 1994. Best linear unbiased estimation of location and scale parameters of the half logistic distribution based on Type II censored samples. Amer. J. Math. Manag. Sci. 14, 53–102.
Bohn, L.L., 1996. A review of nonparametric ranked-set sampling methodology. Commun. Statist. Theory Methods 25, 2675–2685.
Dell, T.R., Clutter, J.L., 1972. Ranked set sampling theory with order statistics background. Biometrics 28, 545–555.


Halls, L.K., Dell, T.R., 1966. Trials of ranked-set sampling for forage yields. Forest Sci. 12, 22–26.
McIntyre, G.A., 1952. A method of unbiased selective sampling, using ranked sets. Australian J. Agri. Res. 3, 385–390.
Patil, G.P., Sinha, A.K., Taillie, C., 1993. Ranked set sampling from a finite population in the presence of a trend on a site. J. Appl. Statist. Sci. 1, 51–65.
Stokes, S.L., Sager, T.W., 1988. Characterization of a ranked-set sample with application to estimating distribution functions. J. Amer. Statist. Assoc. 83, 374–381.
Takahasi, K., Wakimoto, K., 1968. On unbiased estimates of the population mean based on the sample stratified by means of ordering. Ann. Inst. Statist. Math. 20, 1–31.