CHAPTER 19
EXTROPY ESTIMATION IN RANKED SET SAMPLING WITH ITS APPLICATION IN TESTING UNIFORMITY
Ehsan Zamanzade¹ and Mahdi Mahdizadeh²
¹Department of Statistics, University of Isfahan, Isfahan, Iran
²Department of Statistics, Hakim Sabzevari University, Sabzevar, Iran
19.1 INTRODUCTION

There are situations in which obtaining exact values of sample units is difficult or expensive, but ranking the sample units in a set of small size without referring to their precise values is easy and cheap. In such situations, ranked set sampling (RSS) serves as an efficient alternative to simple random sampling (SRS). RSS was first introduced by McIntyre (1952), who realized that obtaining exact measurements of the mean pasture yield is hard and time-consuming because it requires harvesting the crops, whereas an expert can fairly accurately rank adjacent plots by eye inspection. Although RSS was first motivated by an agricultural problem, it soon found applications in other fields, including forestry (Halls and Dell, 1966), environmental monitoring (Kvam, 2003), medicine (Chen et al., 2005; Zamanzade and Mahdizadeh, 2017a), biometrics (Mahdizadeh and Zamanzade, 2017a), reliability (Mahdizadeh and Zamanzade, 2017b), and educational studies (Wang et al., 2016).

To draw a ranked set sample, one first determines the set size $H$ and a vector of in-stratum sample sizes $m = (m_1, \ldots, m_H)$ such that $n = \sum_{h=1}^{H} m_h$ is the total sample size. One then draws a simple random sample of size $nH$ from the population of interest and randomly partitions it into $n$ sets, each of size $H$. Each set of size $H$ is then ranked from smallest to largest. The ranking in this step is done by any cheap method that does not require exact measurement of the sample units. From the first $m_1$ sets of size $H$, the sample units with judgment rank 1 are selected for actual measurement. From the next $m_2$ sets of size $H$, the sample units with judgment rank 2 are selected for quantification. This process continues until the sample units with judgment rank $H$ are selected for quantification from the last $m_H$ sets of size $H$. The resulting ranked set sample is called unbalanced if the numbers of different judgment order statistics are not equal. A ranked set sample is called balanced if $m_1 = \cdots = m_H = m$, and the value of $m$ in this case is called the cycle size.
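To make the sampling scheme concrete, the following sketch draws a balanced ranked set sample in Python. It is a minimal illustration, assuming that ranking within each set is done on the true values (perfect ranking); the function name and arguments are our own and are not taken from the chapter.

```python
import numpy as np

def balanced_rss(population_draw, H, m, rng=None):
    """Draw a balanced ranked set sample with set size H and cycle size m.

    population_draw(size, rng) returns `size` independent draws from the
    parent distribution.  Ranking within each set uses the true values,
    i.e., perfect ranking is assumed in this sketch.
    """
    rng = np.random.default_rng(rng)
    sample = []
    for h in range(1, H + 1):            # judgment rank h = 1, ..., H
        for _ in range(m):               # m sets per judgment rank
            a_set = population_draw(H, rng)
            a_set.sort()                 # rank the set (perfect ranking)
            sample.append(a_set[h - 1])  # measure only the h-th ranked unit
    return np.array(sample)              # n = m * H measured units

# Example: a balanced RSS of size n = 3 * 4 = 12 from a standard normal parent
x_rss = balanced_rss(lambda size, rng: rng.normal(size=size), H=4, m=3, rng=1)
print(x_rss)
```

Drawing each set separately, as above, is distributionally equivalent to drawing all $nH$ units at once and partitioning them, provided sampling is from an effectively infinite population.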
A ranked set sample, in its general form, is denoted by $X_{[i]j}$, $i = 1, \ldots, H$, $j = 1, \ldots, m_i$, where $X_{[i]j}$ is the $j$th measured unit with judgment rank $i$. The term "judgment rank" and the subscript $[\cdot]$ indicate that the ranking is done without observing the actual values of the units in the set, and thus it may be inaccurate and contain errors (imperfect ranking). If the ranking is perfect, then the subscript $[\cdot]$ is replaced with $(\cdot)$, and the resulting ranked set sample is denoted by $X_{(i)j}$, $i = 1, \ldots, H$, $j = 1, \ldots, m_i$. In this case, the distribution of $X_{(i)j}$ is the same as the distribution of the $i$th order statistic from a sample of size $H$. Throughout this chapter, we assume that the ranking process is consistent, which means that the same ranking process is applied to all sets of size $H$. Under a consistent ranking process, it can be shown that the following identity holds:
$$F(t) = \frac{1}{H}\sum_{h=1}^{H} F_{[h]}(t),$$
where $F_{[h]}$ is the cumulative distribution function (CDF) of a sample unit with judgment rank $h$.
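Under perfect ranking, $F_{(h)}$ is the CDF of the $h$th order statistic of $H$ independent draws from $F$, so the identity above can be checked numerically. The short sketch below does this for a standard normal parent; it is only an illustrative verification, not part of the chapter's methodology.

```python
import numpy as np
from scipy.stats import norm, binom

H = 5
t = np.linspace(-2.5, 2.5, 11)
p = norm.cdf(t)                      # F(t) for a standard normal parent

# CDF of the h-th order statistic of H iid draws: P(at least h of the H are <= t)
F_h = [binom.sf(h - 1, H, p) for h in range(1, H + 1)]

# Average over the H judgment ranks and compare with F(t)
print(np.allclose(np.mean(F_h, axis=0), p))   # True
```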
19.2 EXTROPY ESTIMATION USING A RANKED SET SAMPLE

Let $X$ be the continuous variable of interest, with probability density function (pdf) $f$ and cumulative distribution function (CDF) $F$. As a measure of uncertainty, the entropy of the random variable $X$ is defined by Shannon (1948) as
$$H(f) = -\int_{-\infty}^{+\infty} \log\left(f(x)\right) f(x)\, dx.$$
Due to the numerous applications of entropy in statistics, information theory, and engineering, the problem of nonparametric estimation of $H(f)$ has received considerable attention. Vasicek (1976) was the first to propose estimating $H(f)$ using spacings of order statistics. His estimator is based on the fact that the entropy of a continuous random variable $X$ with CDF $F$ can be expressed as
$$H(f) = \int_{0}^{1} \log\left(\frac{d}{dp} F^{-1}(p)\right) dp.$$
He proposed estimating the entropy by using the empirical distribution function and replacing the differential operator with a difference operator. Let $X_1, \ldots, X_n$ be a simple random sample of size $n$ from the population of interest, with ordered values $X_{(1)} < \cdots < X_{(n)}$. Then Vasicek's (1976) entropy estimator is given by
$$H_V^{srs} = \frac{1}{n}\sum_{i=1}^{n} \log\left\{\frac{n}{2w}\left(X_{(i+w)} - X_{(i-w)}\right)\right\},$$
where $w \le n/2$ is an integer called the window size, $X_{(i)} = X_{(1)}$ for $i < 1$, and $X_{(i)} = X_{(n)}$ for $i > n$. Ebrahimi et al. (1994) improved Vasicek's (1976) entropy estimator by assigning different weights to the observations at the boundaries. Their corrected entropy estimator is given by
$$H_E^{srs} = \frac{1}{n}\sum_{i=1}^{n} \log\left\{\frac{n}{c_i w}\left(X_{(i+w)} - X_{(i-w)}\right)\right\},$$
where
$$c_i = \begin{cases} 1 + \dfrac{i-1}{w}, & 1 \le i \le w, \\ 2, & w+1 \le i \le n-w, \\ 1 + \dfrac{n-i}{w}, & n-w+1 \le i \le n, \end{cases}$$
$w$ is the window size defined as before, and $X_{(i)} = X_{(1)}$ for $i < 1$ and $X_{(i)} = X_{(n)}$ for $i > n$.
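As an illustration, the two spacing-based entropy estimators $H_V^{srs}$ and $H_E^{srs}$ can be coded in a few lines. This is a minimal sketch, assuming the sample is passed as a one-dimensional array; the function names are ours.

```python
import numpy as np

def boundary_weights(n, w):
    """Boundary weights c_i, i = 1, ..., n, of Ebrahimi et al. (1994)."""
    i = np.arange(1, n + 1)
    c = np.full(n, 2.0)
    c[i <= w] = 1 + (i[i <= w] - 1) / w
    c[i >= n - w + 1] = 1 + (n - i[i >= n - w + 1]) / w
    return c

def entropy_vasicek(x, w):
    """Vasicek (1976) entropy estimator based on spacings of order statistics."""
    x = np.sort(x)
    n = len(x)
    idx = np.arange(n)
    upper = x[np.minimum(idx + w, n - 1)]   # X_(i+w), truncated at X_(n)
    lower = x[np.maximum(idx - w, 0)]       # X_(i-w), truncated at X_(1)
    return np.mean(np.log(n * (upper - lower) / (2 * w)))

def entropy_ebrahimi(x, w):
    """Ebrahimi et al. (1994) bias-corrected entropy estimator."""
    x = np.sort(x)
    n = len(x)
    idx = np.arange(n)
    upper = x[np.minimum(idx + w, n - 1)]
    lower = x[np.maximum(idx - w, 0)]
    c = boundary_weights(n, w)
    return np.mean(np.log(n * (upper - lower) / (c * w)))
```

For a standard normal sample of size 50 with $w = 7$, both functions should return values a little below the true normal entropy $\tfrac{1}{2}\log(2\pi e) \approx 1.42$, with the Ebrahimi et al. version typically closer, reflecting its boundary correction.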
As the complementary dual of entropy, Lad et al. (2015) introduced a new measure, called extropy, defined as
$$J(X) = -\frac{1}{2}\int_{-\infty}^{+\infty} f^{2}(x)\, dx.$$
Lad et al. (2015) also investigated several interesting properties of extropy and resolved a fundamental question regarding Shannon's entropy measure. Qiu (2017) provided characterization results, monotonicity properties, and a lower bound for the extropy of order statistics and record values. Qiu and Jia (2018) used extropy for testing uniformity and showed that the resulting test performs well in comparison with its competitors in the literature, including the entropy-based tests of Zamanzade (2015). Following the lines of Vasicek (1976) and Ebrahimi et al. (1994), Qiu and Jia (2018) developed two estimators of extropy. Let $X_1, \ldots, X_n$ be a simple random sample of size $n$ from the population of interest, with ordered values $X_{(1)} < \cdots < X_{(n)}$. Then Qiu and Jia's (2018) extropy estimators are given by
$$J_{Q1}^{srs} = -\frac{1}{2n}\sum_{i=1}^{n} \frac{2w/n}{X_{(i+w)} - X_{(i-w)}}, \qquad J_{Q2}^{srs} = -\frac{1}{2n}\sum_{i=1}^{n} \frac{c_i w/n}{X_{(i+w)} - X_{(i-w)}},$$
where
$$c_i = \begin{cases} 1 + \dfrac{i-1}{w}, & 1 \le i \le w, \\ 2, & w+1 \le i \le n-w, \\ 1 + \dfrac{n-i}{w}, & n-w+1 \le i \le n, \end{cases}$$
$w$ is the window size defined as before, and $X_{(i)} = X_{(1)}$ for $i < 1$ and $X_{(i)} = X_{(n)}$ for $i > n$.
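A direct translation of $J_{Q1}^{srs}$ and $J_{Q2}^{srs}$ into code is given below. It is a sketch under the same conventions as the entropy functions above (array input, our own function name), reusing the boundary weights $c_i$.

```python
import numpy as np

def extropy_estimators(x, w):
    """Spacing-based extropy estimators of Qiu and Jia (2018).

    Returns (J_Q1, J_Q2) computed from the ordered sample, with
    X_(i) = X_(1) for i < 1 and X_(i) = X_(n) for i > n.
    """
    x = np.sort(x)
    n = len(x)
    idx = np.arange(n)
    spacing = x[np.minimum(idx + w, n - 1)] - x[np.maximum(idx - w, 0)]

    i = np.arange(1, n + 1)
    c = np.full(n, 2.0)                      # Ebrahimi-type boundary weights
    c[i <= w] = 1 + (i[i <= w] - 1) / w
    c[i >= n - w + 1] = 1 + (n - i[i >= n - w + 1]) / w

    j_q1 = -np.mean((2 * w / n) / spacing) / 2
    j_q2 = -np.mean((c * w / n) / spacing) / 2
    return j_q1, j_q2

# Example: for a standard uniform sample the estimates should be near J(f) = -0.5
rng = np.random.default_rng(0)
u = rng.uniform(size=50)
print(extropy_estimators(u, w=int(np.sqrt(50) + 0.5)))
```

The RSS estimators introduced below apply the same formulas to the ordered values of the pooled ranked set sample, so this function can be reused unchanged.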
Let $X_{[i]j}$, $i = 1, \ldots, H$, $j = 1, \ldots, m$, be a balanced ranked set sample of size $n = mH$ from the population of interest, with corresponding ordered values $Z_1 < \cdots < Z_n$. Mahdizadeh and Arghami (2009) modified Vasicek's (1976) entropy estimator for use with a balanced ranked set sample. Their proposed estimator has the form
$$H_V^{rss} = \frac{1}{n}\sum_{i=1}^{n} \log\left\{\frac{n}{2w}\left(Z_{i+w} - Z_{i-w}\right)\right\},$$
where $Z_i = Z_1$ for $i < 1$ and $Z_i = Z_n$ for $i > n$. Zamanzade and Mahdizadeh (2017b) developed entropy estimators in balanced RSS based on the estimator of Ebrahimi et al. (1994). Their estimator is given by
$$H_E^{rss} = \frac{1}{n}\sum_{i=1}^{n} \log\left\{\frac{n}{c_i w}\left(Z_{i+w} - Z_{i-w}\right)\right\},$$
where
$$c_i = \begin{cases} 1 + \dfrac{i-1}{w}, & 1 \le i \le w, \\ 2, & w+1 \le i \le n-w, \\ 1 + \dfrac{n-i}{w}, & n-w+1 \le i \le n, \end{cases}$$
$Z_i = Z_1$ for $i < 1$, and $Z_i = Z_n$ for $i > n$. Following the lines of Mahdizadeh and Arghami (2009) and Zamanzade and Mahdizadeh (2017b), we can develop extropy estimators for RSS as follows:
$$J_{Q1}^{rss} = -\frac{1}{2n}\sum_{i=1}^{n} \frac{2w/n}{Z_{i+w} - Z_{i-w}}, \qquad J_{Q2}^{rss} = -\frac{1}{2n}\sum_{i=1}^{n} \frac{c_i w/n}{Z_{i+w} - Z_{i-w}},$$
where $c_i$ is as defined before, $Z_i = Z_1$ for $i < 1$, and $Z_i = Z_n$ for $i > n$.
We conducted a simulation study to compare the different extropy estimators under balanced RSS and SRS designs in terms of root mean squared error (RMSE). To this end, we generated 100,000 samples of sizes $n = 10, 20, 30, 50$ from the standard normal, standard uniform, and standard exponential distributions. The values of the set size $H$ are taken to be 2 and 5, and the window size $w$ is selected according to Grzegorzewski and Wieczorkowski's (1999) heuristic formula, i.e., $w = \left[\sqrt{n} + 0.5\right]$, where $[x]$ is the integer part of $x$. The imperfect-ranking model that we utilize is the fraction-of-random-rankings model developed by Frey et al. (2007). Under this model, the distribution of the $i$th judgment order statistic is a mixture of the distribution of the true $i$th order statistic and a random draw from the parent distribution, i.e.,
$$F_{[i]} = \lambda F_{(i)} + (1 - \lambda) F,$$
where the parameter $\lambda \in [0, 1]$ determines the quality of the ranking. The values of $\lambda$ in this simulation study are selected from the set $\{0.5, 0.8, 1\}$, corresponding to moderate, good, and perfect ranking, respectively.

Tables 19.1–19.3 show the estimated RMSEs and biases of the extropy estimators. Table 19.1 presents the results when the parent distribution is standard normal. It can be seen that the RSS estimators outperform their SRS counterparts. In both the SRS and RSS schemes, $J_{Q2}$ always works better than $J_{Q1}$. The performance of any extropy estimator improves as the total sample size ($n$), the set size ($H$), or the value of $\lambda$ increases, provided that the other factors are held fixed. The simulation results for the standard exponential and standard uniform distributions are presented in Tables 19.2 and 19.3, respectively. The general trends are similar to those noted for Table 19.1.
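The sketch below illustrates one cell of such a simulation: it draws balanced ranked set samples under the fraction-of-random-rankings model and estimates the RMSE of $J_{Q2}^{rss}$ for a standard uniform parent (true extropy $-0.5$). Selecting the true rank with probability $\lambda$ and an arbitrary unit of the set otherwise is one convenient way of generating from this mixture; the generator and the number of replications are our own choices for illustration, not the exact code used for the tables.

```python
import numpy as np

def rss_fraction_random(H, m, lam, rng):
    """Balanced RSS from Uniform(0,1) under the Frey et al. (2007) model:
    with probability lam the unit with true rank h is measured, otherwise
    a randomly chosen unit of the set is measured."""
    sample = []
    for h in range(H):
        for _ in range(m):
            a_set = np.sort(rng.uniform(size=H))
            k = h if rng.uniform() < lam else rng.integers(H)
            sample.append(a_set[k])
    return np.array(sample)

def j_q2(x, w):
    """Ebrahimi-type extropy estimator J_Q2 from an (unordered) sample."""
    x = np.sort(x)
    n = len(x)
    idx = np.arange(n)
    spacing = x[np.minimum(idx + w, n - 1)] - x[np.maximum(idx - w, 0)]
    i = np.arange(1, n + 1)
    c = np.full(n, 2.0)
    c[i <= w] = 1 + (i[i <= w] - 1) / w
    c[i >= n - w + 1] = 1 + (n - i[i >= n - w + 1]) / w
    return -np.mean((c * w / n) / spacing) / 2

rng = np.random.default_rng(2019)
H, m, lam = 5, 6, 0.8                    # n = 30, good ranking
n = H * m
w = int(np.sqrt(n) + 0.5)                # heuristic window size
estimates = np.array([j_q2(rss_fraction_random(H, m, lam, rng), w)
                      for _ in range(2000)])
rmse = np.sqrt(np.mean((estimates - (-0.5)) ** 2))
print(round(rmse, 3))
```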
19.3 EXTROPY-BASED TESTS OF UNIFORMITY IN RSS

In this section, we evaluate the performance of the extropy-based test of uniformity in RSS and compare it with its SRS counterpart using Monte Carlo simulation. Testing uniformity is an important problem from a practical point of view, because any goodness-of-fit problem can be expressed as a problem of testing uniformity. This follows from the probability integral transform theorem, which states that if the variable of interest $X$ follows a continuous distribution with cumulative distribution function $F$, then $Y = F(X)$ follows a standard uniform distribution.
Table 19.1 Estimated RMSE and Bias of Different Extropy Estimators When the Parent Distribution Is Standard Normal With $J(f) = -0.141$

RSS ($\lambda = 1$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.13 | -0.10 | 0.07 | -0.04 |
| 10 | 5 | 0.11 | -0.09 | 0.06 | -0.03 |
| 20 | 2 | 0.05 | -0.04 | 0.03 | -0.02 |
| 20 | 5 | 0.05 | 0.04 | 0.03 | 0.01 |
| 30 | 2 | 0.04 | 0.02 | 0.03 | 0.01 |
| 30 | 5 | 0.03 | 0.02 | 0.02 | 0.01 |
| 50 | 2 | 0.02 | 0.01 | 0.02 | 0.00 |
| 50 | 5 | 0.02 | 0.01 | 0.01 | 0.00 |

RSS ($\lambda = 0.8$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.13 | -0.10 | 0.07 | -0.04 |
| 10 | 5 | 0.12 | -0.09 | 0.06 | -0.03 |
| 20 | 2 | 0.06 | -0.04 | 0.04 | -0.02 |
| 20 | 5 | 0.05 | 0.04 | 0.03 | 0.01 |
| 30 | 2 | 0.04 | 0.02 | 0.02 | 0.01 |
| 30 | 5 | 0.03 | 0.02 | 0.02 | 0.01 |
| 50 | 2 | 0.02 | 0.01 | 0.02 | 0.00 |
| 50 | 5 | 0.02 | 0.01 | 0.02 | 0.00 |

RSS ($\lambda = 0.5$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.13 | 0.10 | 0.07 | -0.04 |
| 10 | 5 | 0.12 | -0.10 | 0.07 | -0.04 |
| 20 | 2 | 0.06 | -0.04 | 0.03 | -0.02 |
| 20 | 5 | 0.05 | -0.04 | 0.03 | -0.02 |
| 30 | 2 | 0.04 | -0.03 | 0.02 | -0.01 |
| 30 | 5 | 0.04 | -0.02 | 0.02 | -0.01 |
| 50 | 2 | 0.02 | -0.01 | 0.02 | 0.00 |
| 50 | 5 | 0.02 | -0.01 | 0.02 | 0.00 |

SRS:

| $n$ | $H$ | RMSE $J_{Q1}^{srs}$ | Bias $J_{Q1}^{srs}$ | RMSE $J_{Q2}^{srs}$ | Bias $J_{Q2}^{srs}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.13 | -0.10 | 0.07 | -0.04 |
| 10 | 5 | 0.13 | -0.10 | 0.07 | -0.04 |
| 20 | 2 | 0.06 | -0.04 | 0.04 | -0.02 |
| 20 | 5 | 0.06 | -0.04 | 0.04 | -0.02 |
| 30 | 2 | 0.04 | -0.02 | 0.02 | -0.01 |
| 30 | 5 | 0.04 | -0.02 | 0.02 | -0.01 |
| 50 | 2 | 0.02 | -0.01 | 0.02 | 0.00 |
| 50 | 5 | 0.02 | -0.01 | 0.02 | 0.00 |
Qiu and Jia (2018) showed that the standard uniform distribution maximizes the extropy $J(f)$ among all continuous distributions with a density $f$ supported on $(0, 1)$. Based on this property, they proposed the following statistic for testing uniformity:
$$T^{srs} = -J_{Q2}^{srs},$$
and they reject the null hypothesis of uniformity for large enough values of $T^{srs}$. Following the lines of Qiu and Jia (2018), one can also perform an extropy-based test of uniformity with a ranked set sample, using the test statistic
$$T^{rss} = -J_{Q2}^{rss},$$
and rejecting the null hypothesis of uniformity for large enough values of $T^{rss}$.
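A Monte Carlo implementation of the SRS version of this test is sketched below: the critical value of $T^{srs}$ is obtained by simulating the statistic under the null hypothesis of uniformity. The helper names and the number of null replications are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def t_statistic(x, w):
    """T = -J_Q2 computed from a sample assumed to lie in (0, 1)."""
    x = np.sort(x)
    n = len(x)
    idx = np.arange(n)
    spacing = x[np.minimum(idx + w, n - 1)] - x[np.maximum(idx - w, 0)]
    i = np.arange(1, n + 1)
    c = np.full(n, 2.0)
    c[i <= w] = 1 + (i[i <= w] - 1) / w
    c[i >= n - w + 1] = 1 + (n - i[i >= n - w + 1]) / w
    return np.mean((c * w / n) / spacing) / 2        # minus J_Q2

def uniformity_test(x, alpha=0.1, n_null=20000, seed=0):
    """Reject uniformity when T exceeds the simulated (1 - alpha) null quantile."""
    n = len(x)
    w = int(np.sqrt(n) + 0.5)
    rng = np.random.default_rng(seed)
    t_null = np.array([t_statistic(rng.uniform(size=n), w) for _ in range(n_null)])
    crit = np.quantile(t_null, 1 - alpha)
    t_obs = t_statistic(np.asarray(x), w)
    return t_obs, crit, t_obs > crit

# Example: data from the alternative A_2 (values pushed toward zero) with n = 30
rng = np.random.default_rng(1)
y = 1 - (1 - rng.uniform(size=30)) ** (1 / 2)        # inverse CDF of A_2
print(uniformity_test(y))
```

The same idea carries over to RSS: compute $T^{rss}$ from the pooled ranked set sample and simulate its null distribution by drawing ranked set samples from Uniform(0, 1).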
Table 19.2 Estimated RMSE and Bias of Different Extropy Estimators When the Parent Distribution Is Standard Exponential With $J(f) = -0.25$

RSS ($\lambda = 1$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.24 | 0.15 | 0.12 | -0.04 |
| 10 | 5 | 0.17 | -0.12 | 0.08 | -0.02 |
| 20 | 2 | 0.13 | -0.09 | 0.07 | -0.03 |
| 20 | 5 | 0.11 | -0.08 | 0.05 | -0.02 |
| 30 | 2 | 0.09 | -0.07 | 0.06 | -0.02 |
| 30 | 5 | 0.08 | -0.06 | 0.04 | -0.01 |
| 50 | 2 | 0.07 | -0.05 | 0.04 | -0.01 |
| 50 | 5 | 0.05 | -0.04 | 0.03 | -0.01 |

RSS ($\lambda = 0.8$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.27 | -0.16 | 0.14 | -0.04 |
| 10 | 5 | 0.20 | -0.13 | 0.10 | -0.03 |
| 20 | 2 | 0.14 | -0.09 | 0.08 | -0.03 |
| 20 | 5 | 0.12 | -0.08 | 0.07 | -0.02 |
| 30 | 2 | 0.10 | -0.07 | 0.06 | -0.02 |
| 30 | 5 | 0.09 | -0.06 | 0.05 | -0.01 |
| 50 | 2 | 0.07 | -0.05 | 0.04 | -0.01 |
| 50 | 5 | 0.06 | -0.04 | 0.04 | -0.01 |

RSS ($\lambda = 0.5$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.27 | -0.16 | 0.15 | -0.05 |
| 10 | 5 | 0.24 | -0.15 | 0.13 | -0.04 |
| 20 | 2 | 0.14 | -0.09 | 0.08 | -0.03 |
| 20 | 5 | 0.13 | -0.09 | 0.08 | -0.02 |
| 30 | 2 | 0.10 | -0.07 | 0.06 | -0.02 |
| 30 | 5 | 0.10 | -0.07 | 0.06 | -0.02 |
| 50 | 2 | 0.07 | -0.05 | 0.05 | -0.01 |
| 50 | 5 | 0.07 | 0.05 | 0.04 | -0.01 |

SRS:

| $n$ | $H$ | RMSE $J_{Q1}^{srs}$ | Bias $J_{Q1}^{srs}$ | RMSE $J_{Q2}^{srs}$ | Bias $J_{Q2}^{srs}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.26 | -0.16 | 0.15 | -0.05 |
| 10 | 5 | 0.26 | -0.16 | 0.15 | -0.05 |
| 20 | 2 | 0.15 | -0.09 | 0.09 | -0.03 |
| 20 | 5 | 0.15 | -0.09 | 0.09 | -0.03 |
| 30 | 2 | 0.11 | -0.07 | 0.07 | -0.02 |
| 30 | 5 | 0.11 | -0.07 | 0.07 | -0.02 |
| 50 | 2 | 0.07 | -0.05 | 0.05 | -0.01 |
| 50 | 5 | 0.07 | -0.05 | 0.05 | -0.01 |
Remark 1. We have not considered the test of uniformity based on $J_{Q1}^{rss}$ in our comparisons, because we have observed that $J_{Q2}^{rss}$ is uniformly better than $J_{Q1}^{rss}$.
In order to compare the power of the different tests of uniformity, the following alternative distributions are considered:
$$A_k: \; F(x) = 1 - (1 - x)^k, \quad 0 \le x \le 1 \quad (\text{for } k = 1.5, 2),$$
$$B_k: \; F(x) = \begin{cases} 2^{k-1} x^k, & 0 \le x \le 0.5, \\ 1 - 2^{k-1}(1 - x)^k, & 0.5 \le x \le 1, \end{cases} \quad (\text{for } k = 1.5, 2, 3),$$
$$C_k: \; F(x) = \begin{cases} 0.5 - 2^{k-1}(0.5 - x)^k, & 0 \le x \le 0.5, \\ 0.5 + 2^{k-1}(x - 0.5)^k, & 0.5 \le x \le 1, \end{cases} \quad (\text{for } k = 1.5, 2).$$
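For power simulations one needs to sample from these alternatives, which is straightforward by inverting the CDFs above. The following sketch (our own helper, using the inverse-transform method) generates samples from $A_k$, $B_k$, and $C_k$.

```python
import numpy as np

def sample_alternative(family, k, size, rng):
    """Inverse-transform sampling from the alternatives A_k, B_k, and C_k on (0, 1)."""
    u = rng.uniform(size=size)
    if family == "A":
        # F(x) = 1 - (1 - x)^k  =>  x = 1 - (1 - u)^(1/k)
        return 1 - (1 - u) ** (1 / k)
    if family == "B":
        # F(x) = 2^(k-1) x^k on [0, 0.5] and 1 - 2^(k-1)(1 - x)^k on [0.5, 1]
        x = np.empty_like(u)
        lo = u <= 0.5
        x[lo] = 0.5 * (2 * u[lo]) ** (1 / k)
        x[~lo] = 1 - 0.5 * (2 * (1 - u[~lo])) ** (1 / k)
        return x
    if family == "C":
        # F(x) = 0.5 - 2^(k-1)(0.5 - x)^k on [0, 0.5] and 0.5 + 2^(k-1)(x - 0.5)^k on [0.5, 1]
        x = np.empty_like(u)
        lo = u <= 0.5
        x[lo] = 0.5 - 0.5 * (2 * (0.5 - u[lo])) ** (1 / k)
        x[~lo] = 0.5 + 0.5 * (2 * (u[~lo] - 0.5)) ** (1 / k)
        return x
    raise ValueError("family must be 'A', 'B', or 'C'")

rng = np.random.default_rng(3)
x_b2 = sample_alternative("B", 2, 1000, rng)   # values concentrate around 0.5
print(round(x_b2.mean(), 3))
```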
Table 19.3 Estimated RMSE and Bias of Different Extropy Estimators When the Parent Distribution Is Standard Uniform With $J(f) = -0.5$

RSS ($\lambda = 1$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.43 | -0.36 | 0.19 | -0.12 |
| 10 | 5 | 0.37 | -0.31 | 0.15 | -0.09 |
| 20 | 2 | 0.24 | -0.21 | 0.11 | -0.08 |
| 20 | 5 | 0.21 | -0.19 | 0.08 | -0.06 |
| 30 | 2 | 0.17 | -0.16 | 0.07 | -0.06 |
| 30 | 5 | 0.16 | -0.15 | 0.06 | -0.05 |
| 50 | 2 | 0.12 | -0.11 | 0.05 | -0.04 |
| 50 | 5 | 0.11 | -0.11 | 0.04 | -0.03 |

RSS ($\lambda = 0.8$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.46 | -0.37 | 0.21 | -0.14 |
| 10 | 5 | 0.40 | -0.34 | 0.17 | -0.11 |
| 20 | 2 | 0.24 | -0.22 | 0.11 | -0.08 |
| 20 | 5 | 0.22 | -0.20 | 0.10 | -0.07 |
| 30 | 2 | 0.17 | -0.16 | 0.08 | -0.06 |
| 30 | 5 | 0.16 | -0.15 | 0.07 | -0.05 |
| 50 | 2 | 0.12 | -0.11 | 0.05 | -0.04 |
| 50 | 5 | 0.11 | -0.11 | 0.05 | -0.04 |

RSS ($\lambda = 0.5$):

| $n$ | $H$ | RMSE $J_{Q1}^{rss}$ | Bias $J_{Q1}^{rss}$ | RMSE $J_{Q2}^{rss}$ | Bias $J_{Q2}^{rss}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.46 | -0.38 | 0.21 | -0.14 |
| 10 | 5 | 0.44 | -0.37 | 0.20 | -0.13 |
| 20 | 2 | 0.25 | -0.22 | 0.12 | -0.09 |
| 20 | 5 | 0.24 | -0.21 | 0.11 | -0.08 |
| 30 | 2 | 0.18 | -0.16 | 0.08 | -0.06 |
| 30 | 5 | 0.17 | -0.16 | 0.08 | -0.06 |
| 50 | 2 | 0.12 | -0.12 | 0.05 | -0.04 |
| 50 | 5 | 0.12 | -0.11 | 0.05 | -0.04 |

SRS:

| $n$ | $H$ | RMSE $J_{Q1}^{srs}$ | Bias $J_{Q1}^{srs}$ | RMSE $J_{Q2}^{srs}$ | Bias $J_{Q2}^{srs}$ |
| --- | --- | --- | --- | --- | --- |
| 10 | 2 | 0.46 | -0.39 | 0.22 | -0.15 |
| 10 | 5 | 0.46 | -0.39 | 0.22 | -0.15 |
| 20 | 2 | 0.25 | -0.22 | 0.12 | -0.09 |
| 20 | 5 | 0.25 | -0.22 | 0.12 | -0.09 |
| 30 | 2 | 0.18 | -0.16 | 0.08 | -0.07 |
| 30 | 5 | 0.18 | -0.16 | 0.08 | -0.07 |
| 50 | 2 | 0.12 | -0.12 | 0.05 | -0.04 |
| 50 | 5 | 0.12 | -0.12 | 0.05 | -0.04 |
One can verify that, compared with the uniform distribution, values close to zero are more probable under alternative $A$, values near 0.5 are more probable under alternative $B$, and values close to 0 and 1 are more probable under alternative $C$. Under each alternative, we generated 10,000 RSS and SRS samples of sizes 10, 20, 30, and 50. The set size in RSS is taken from $H \in \{2, 5\}$, the quality of ranking is controlled by the fraction-of-random-rankings model described in Section 19.2 with $\lambda \in \{1, 0.8, 0.5\}$, and the window size $w$ is selected by Grzegorzewski and Wieczorkowski's (1999) heuristic formula, i.e., $w = \left[\sqrt{n} + 0.5\right]$, where $[x]$ is the integer part of $x$. The power estimates of the extropy-based tests of uniformity at significance level $\alpha = 0.1$ are presented in Table 19.4. We observe from Table 19.4 that the extropy-based test of uniformity in RSS outperforms its counterpart in SRS. It is also of interest to note that the power of $T^{rss}$ increases if the sample size ($n$), the set size ($H$), or the value of $\lambda$ increases, provided that the other factors are held fixed. This is consistent with what we observed in the previous section.
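The sketch below illustrates how one entry of such a power study can be computed for the RSS test $T^{rss}$ against the alternative $A_2$ under imperfect rankings. The helper names and replication counts are our own; in particular, simulating the critical value under uniformity with the same design $(H, m, \lambda)$ is an assumption of this sketch, since the chapter does not spell out that step.

```python
import numpy as np

def rss_sample(inv_cdf, H, m, lam, rng):
    """Balanced RSS under the fraction-of-random-rankings model; the parent
    distribution is specified through its inverse CDF (quantile function)."""
    out = []
    for h in range(H):
        for _ in range(m):
            a_set = np.sort(inv_cdf(rng.uniform(size=H)))
            k = h if rng.uniform() < lam else rng.integers(H)
            out.append(a_set[k])
    return np.array(out)

def t_stat(x, w):
    """Test statistic T = -J_Q2 computed from the pooled, ordered sample."""
    x = np.sort(x)
    n = len(x)
    idx = np.arange(n)
    spacing = x[np.minimum(idx + w, n - 1)] - x[np.maximum(idx - w, 0)]
    i = np.arange(1, n + 1)
    c = np.full(n, 2.0)
    c[i <= w] = 1 + (i[i <= w] - 1) / w
    c[i >= n - w + 1] = 1 + (n - i[i >= n - w + 1]) / w
    return np.mean((c * w / n) / spacing) / 2

rng = np.random.default_rng(11)
H, m, lam, alpha, reps = 2, 10, 0.8, 0.1, 2000        # n = 20, good ranking
n = H * m
w = int(np.sqrt(n) + 0.5)

# Critical value simulated under the null (standard uniform parent)
t0 = np.array([t_stat(rss_sample(lambda u: u, H, m, lam, rng), w)
               for _ in range(reps)])
crit = np.quantile(t0, 1 - alpha)

# Power against A_2, whose inverse CDF is 1 - sqrt(1 - u)
t1 = np.array([t_stat(rss_sample(lambda u: 1 - np.sqrt(1 - u), H, m, lam, rng), w)
               for _ in range(reps)])
print(round(np.mean(t1 > crit), 2))    # estimated power of T^rss against A_2
```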
Table 19.4 Power Estimates of Extropy-Based Tests of Uniformity for $n = 10, 20, 30, 50$ and $\alpha = 0.1$ in SRS and RSS Designs

| $n$ | Alt | RSS, $H=2$, $\lambda=1$ | RSS, $H=2$, $\lambda=0.8$ | RSS, $H=2$, $\lambda=0.5$ | RSS, $H=5$, $\lambda=1$ | RSS, $H=5$, $\lambda=0.8$ | RSS, $H=5$, $\lambda=0.5$ | SRS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | $A_{1.5}$ | 0.23 | 0.22 | 0.21 | 0.27 | 0.23 | 0.23 | 0.21 |
| 10 | $A_{2}$ | 0.45 | 0.44 | 0.41 | 0.58 | 0.50 | 0.45 | 0.42 |
| 10 | $B_{1.5}$ | 0.26 | 0.25 | 0.23 | 0.33 | 0.27 | 0.27 | 0.23 |
| 10 | $B_{2}$ | 0.49 | 0.49 | 0.45 | 0.64 | 0.55 | 0.51 | 0.45 |
| 10 | $B_{3}$ | 0.85 | 0.85 | 0.83 | 0.95 | 0.90 | 0.86 | 0.82 |
| 10 | $C_{1.5}$ | 0.12 | 0.12 | 0.12 | 0.12 | 0.12 | 0.13 | 0.12 |
| 10 | $C_{2}$ | 0.19 | 0.20 | 0.19 | 0.20 | 0.20 | 0.20 | 0.19 |
| 20 | $A_{1.5}$ | 0.36 | 0.34 | 0.33 | 0.44 | 0.38 | 0.36 | 0.33 |
| 20 | $A_{2}$ | 0.78 | 0.74 | 0.71 | 0.90 | 0.81 | 0.76 | 0.70 |
| 20 | $B_{1.5}$ | 0.37 | 0.35 | 0.32 | 0.46 | 0.39 | 0.38 | 0.32 |
| 20 | $B_{2}$ | 0.77 | 0.73 | 0.71 | 0.87 | 0.81 | 0.77 | 0.71 |
| 20 | $B_{3}$ | 0.99 | 0.99 | 0.99 | 1.00 | 1.00 | 0.99 | 0.99 |
| 20 | $C_{1.5}$ | 0.24 | 0.24 | 0.23 | 0.26 | 0.24 | 0.25 | 0.23 |
| 20 | $C_{2}$ | 0.53 | 0.51 | 0.49 | 0.58 | 0.53 | 0.53 | 0.50 |
| 30 | $A_{1.5}$ | 0.50 | 0.48 | 0.47 | 0.57 | 0.52 | 0.48 | 0.44 |
| 30 | $A_{2}$ | 0.93 | 0.91 | 0.89 | 0.98 | 0.95 | 0.92 | 0.87 |
| 30 | $B_{1.5}$ | 0.49 | 0.48 | 0.46 | 0.57 | 0.53 | 0.50 | 0.44 |
| 30 | $B_{2}$ | 0.91 | 0.90 | 0.90 | 0.97 | 0.95 | 0.92 | 0.88 |
| 30 | $B_{3}$ | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| 30 | $C_{1.5}$ | 0.35 | 0.35 | 0.34 | 0.37 | 0.36 | 0.37 | 0.33 |
| 30 | $C_{2}$ | 0.75 | 0.74 | 0.73 | 0.80 | 0.78 | 0.76 | 0.72 |
| 50 | $A_{1.5}$ | 0.72 | 0.70 | 0.69 | 0.84 | 0.78 | 0.68 | 0.67 |
| 50 | $A_{2}$ | 1.00 | 0.99 | 0.99 | 1.00 | 1.00 | 0.99 | 0.99 |
| 50 | $B_{1.5}$ | 0.70 | 0.69 | 0.68 | 0.81 | 0.76 | 0.70 | 0.67 |
| 50 | $B_{2}$ | 0.99 | 0.99 | 0.99 | 1.00 | 1.00 | 0.99 | 0.99 |
| 50 | $B_{3}$ | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| 50 | $C_{1.5}$ | 0.55 | 0.54 | 0.53 | 0.61 | 0.58 | 0.54 | 0.52 |
| 50 | $C_{2}$ | 0.95 | 0.95 | 0.94 | 0.98 | 0.97 | 0.95 | 0.94 |
REFERENCES

Chen, H., Stasny, E.A., Wolfe, D.A., 2005. Ranked set sampling for efficient estimation of a population proportion. Stat. Med. 24, 3319–3329.
Ebrahimi, N., Habibullah, M., Soofi, E., 1994. Two measures of sample entropy. Stat. Probab. Lett. 20, 225–234.
Frey, J., Ozturk, O., Deshpande, J.V., 2007. Nonparametric tests of perfect judgment rankings. J. Am. Stat. Assoc. 102 (478), 708–717.
Grzegorzewski, P., Wieczorkowski, R., 1999. Entropy-based goodness-of-fit test for exponentiality. Commun. Stat.: Theory Methods 28, 1183–1202.
Halls, L.K., Dell, T.R., 1966. Trial of ranked-set sampling for forage yields. For. Sci. 12, 22–26.
Kvam, P.H., 2003. Ranked set sampling based on binary water quality data with covariates. J. Agric. Biol. Environ. Stat. 8, 271–279.
Lad, F., Sanfilippo, G., Agro, G., 2015. Extropy: complementary dual of entropy. Stat. Sci. 30, 40–58.
Mahdizadeh, M., Arghami, N., 2009. Efficiency of ranked set sampling in entropy estimation and goodness-of-fit testing for the inverse Gaussian law. J. Stat. Comput. Simul. 80, 761–774.
Mahdizadeh, M., Zamanzade, E., 2017a. Efficient body fat estimation using multistage pair ranked set sampling. To appear in Statistical Methods in Medical Research. Available from: https://doi.org/10.1177/0962280217720473.
Mahdizadeh, M., Zamanzade, E., 2017b. A new reliability measure in ranked set sampling. To appear in Statistical Papers. Available from: http://dx.doi.org/10.1007/s00362-016-0794-3.
McIntyre, G.A., 1952. A method for unbiased selective sampling using ranked set sampling. Aust. J. Agric. Res. 3, 385–390.
Qiu, G., 2017. The extropy of order statistics and record values. Stat. Probab. Lett. 120, 52–60.
Qiu, G., Jia, K., 2018. Extropy estimators with applications in testing uniformity. To appear in Journal of Nonparametric Statistics.
Shannon, C.E., 1948. A mathematical theory of communication. Bell Syst. Tech. J. 27, 623–656.
Vasicek, O., 1976. A test for normality based on sample entropy. J. R. Stat. Soc., Ser. B 38, 54–59.
Wang, X., Lim, J., Stokes, L., 2016. Using ranked set sampling with cluster randomized designs for improved inference on treatment effects. J. Am. Stat. Assoc. 111 (516), 1576–1590.
Zamanzade, E., 2015. Testing uniformity based on new entropy estimators. J. Stat. Comput. Simul. 85 (16), 3191–3205.
Zamanzade, E., Mahdizadeh, M., 2017a. A more efficient proportion estimator in ranked set sampling. Stat. Probab. Lett. 129, 28–33.
Zamanzade, E., Mahdizadeh, M., 2017b. Entropy estimation from ranked set samples with application to test of fit. Rev. Colomb. Estad. 40, 1–19.
FURTHER READING

Dell, T.R., Clutter, J.L., 1972. Ranked set sampling theory with order statistics background. Biometrics 28 (2), 545–555.
Takahasi, K., Wakimoto, K., 1968. On unbiased estimates of the population mean based on the sample stratified by means of ordering. Ann. Inst. Stat. Math. 20 (1), 1–31.
Zamanzade, E., Wang, X., 2018. Comput. Stat. Available from: https://doi.org/10.1007/s00180-018-0807-x.