Statistics and Probability Letters 82 (2012) 798–804
Characterizations of symmetric distributions based on Rényi entropy

M. Fashandi, Jafar Ahmadi*
Department of Statistics, Ordered and Spatial Data Center of Excellence, Ferdowsi University of Mashhad, P.O. Box 91775-1159, Mashhad, Iran
Article history: Received 5 June 2011; Received in revised form 7 January 2012; Accepted 8 January 2012; Available online 16 January 2012.

Abstract: It is proved that the equality of Rényi entropies of upper and lower order statistics, as well as of upper and lower k-records, is a characteristic property of symmetric distributions. Also, for the Farlie–Gumbel–Morgenstern (FGM) family, it is shown that under some conditions the equality of entropies of concomitants of upper and lower order statistics, as well as of concomitants of upper and lower record values, is a characteristic property of the uniform distribution. © 2012 Elsevier B.V. All rights reserved.

MSC: 62E10; 62G30

Keywords: Müntz–Szász theorem; Farlie–Gumbel–Morgenstern (FGM) family; k-records; Concomitant; Shannon information
1. Introduction

Let X be a random variable having an absolutely continuous cumulative distribution function (cdf) F with probability density function (pdf) f; then the entropy of order α, or Rényi entropy, is defined as

H_\alpha(X) = \frac{1}{1-\alpha}\log\int_{-\infty}^{+\infty} f^{\alpha}(x)\,dx = \frac{1}{1-\alpha}\log E[f(X)]^{\alpha-1},    (1)

where α > 0, α ≠ 1. It can be shown that \lim_{\alpha\to 1} H_\alpha(X) = H(X), where

H(X) = -\int_{-\infty}^{+\infty} f(x)\log f(x)\,dx    (2)
is commonly referred to as the entropy or Shannon information measure of X. The properties and virtues of H(X) have been thoroughly investigated by Shannon (1948); we refer the reader to Cover and Thomas (1991) for more details and further references. Recently, several articles have been published on characterizations of distribution functions based on the entropies of order statistics and record values. Raqab and Awad (2000, 2001) obtained a characterization of the generalized Pareto distribution based on the Shannon entropy of k-record statistics. Baratpour et al. (2007, 2008) established several characterizations based on the Rényi entropies of order statistics and record values. Ahmadi and Fashandi (2009) proved that the equality of the entropies of the endpoints of record coverage is a characteristic property of symmetric distributions.
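As a purely illustrative aside (not part of the original paper), definitions (1) and (2) are easy to check numerically. The following Python sketch, with function names and integration grid of our own choosing, evaluates both entropies for the standard normal density by the trapezoid rule and confirms the limit α → 1.

```python
import math

def renyi_entropy(pdf, alpha, lo=-10.0, hi=10.0, n=20000):
    """H_alpha(X) = (1/(1-alpha)) * log of the integral of f^alpha, Eq. (1)."""
    h = (hi - lo) / n
    ys = [pdf(lo + i * h) ** alpha for i in range(n + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))  # trapezoid rule
    return math.log(integral) / (1.0 - alpha)

def shannon_entropy(pdf, lo=-10.0, hi=10.0, n=20000):
    """H(X) = -integral of f log f, Eq. (2); integrand set to 0 where f = 0."""
    h = (hi - lo) / n
    ys = [-pdf(lo + i * h) * math.log(pdf(lo + i * h))
          if pdf(lo + i * h) > 0 else 0.0 for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Standard normal density
phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

h_half = renyi_entropy(phi, 0.5)       # alpha = 1/2
h_near1 = renyi_entropy(phi, 1.0001)   # alpha close to 1
h_shannon = shannon_entropy(phi)
```

For N(0, 1) the closed forms are H_α = (1/2) log 2π − log α/(2(1 − α)) and H = (1/2) log 2πe; the quadrature reproduces both, and H_α at α = 1.0001 is already close to the Shannon value, illustrating the limit below (1).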
* Corresponding author. E-mail addresses: [email protected] (M. Fashandi), [email protected] (J. Ahmadi).
doi:10.1016/j.spl.2012.01.004
Some characterization results have also been obtained in terms of the Fisher information of order statistics. Zheng (2001) obtained characterizations of the Weibull family based on the properties of Fisher information under type I and type II censoring. Park and Zheng (2004) derived a necessary and sufficient condition under which two distributions have equal Fisher information in any order statistic. Hofmann et al. (2005) obtained a characterization of the hazard rate function based on the Fisher information in the first order statistic as well as in upper record values. In this paper, we study the relation between the entropies of ordered data and the symmetry properties of the parent distribution. It may be noted that the class of symmetric distributions is broad and includes several well-known distributions such as the normal, logistic, Student's t, Cauchy, Laplace and uniform distributions. First, we recall some notions of completeness.

Definition 1. A sequence \{\varphi_n\}_{n\ge 1} in a Hilbert space H is called complete if the only element of H which is orthogonal to every \varphi_n is the null element, that is,

\langle f, \varphi_n\rangle = 0 \ (n \ge 1) \ \Rightarrow \ f = o,
where o stands for the zero element of H. It may be noted that if \{\varphi_n, n \ge 1\} is a complete sequence in a Hilbert space H, then every element in H can be approximated to any desired degree of accuracy by an expression f = \sum_n c_n \varphi_n with c_n = \langle f, \varphi_n\rangle. See, for example, Higgins (2004) for more details on the theory of completeness. Now, we recall the following theorem, well known as the Müntz–Szász Theorem, which is used in the proofs of the results in this paper.

Theorem 1 (Higgins, 2004, pp. 95–96). The set \{x^{\lambda_1}, x^{\lambda_2}, \ldots : 1 \le \lambda_1 < \lambda_2 < \cdots\} forms a complete sequence in L^2(0, 1) if and only if

\sum_{j=1}^{+\infty} \lambda_j^{-1} = +\infty,    (3)

where 1 \le \lambda_1 < \lambda_2 < \cdots.
Hwang and Lin (1984) extended the Müntz–Szász Theorem to \{f^{\lambda_j}(x), j \ge 1\}, where f(x) is absolutely continuous and monotone on (a, b):

Theorem 2 (Hwang and Lin, 1984). Let f(x) be an absolutely continuous function on (a, b) with f(a)f(b) \ge 0, and let its derivative satisfy f'(x) \ne 0 a.e. on (a, b). Then, under the assumption (3), the sequence \{f^{\lambda_j}(x), j \ge 1\} is complete on (a, b) if and only if the function f(x) is monotone on (a, b).

In what follows, we show that the equality of the Rényi entropies of upper and lower order statistics, as well as of upper and lower k-records, is a characteristic property of symmetric distributions. These results are presented in Sections 2 and 3. In Section 4, it is shown that, for the Farlie–Gumbel–Morgenstern (FGM) family, the equality of the entropies of concomitants of upper and lower order statistics, as well as of concomitants of upper and lower record values, is a characteristic property of the uniform distribution. It is well known that the uniform distribution is symmetric around its mean.

2. Characterizations based on order statistics

Suppose that X_1, X_2, \ldots, X_n are n independent and identically distributed (iid) continuous random variables, each with cdf F_X and pdf f_X with support S_X. If the random variables X_1, X_2, \ldots, X_n are arranged in ascending order and written as X_{1:n} \le X_{2:n} \le \cdots \le X_{n:n}, then X_{i:n} is called the i-th order statistic and its pdf is given by

f_{X_{i:n}}(x) = i\binom{n}{i} F_X^{i-1}(x)[1 - F_X(x)]^{n-i} f_X(x), \quad x \in S_X.    (4)
The order statistics have been used in a wide range of practical problems; we refer the reader to Arnold et al. (2008) and David and Nagaraja (2003) for more details. Ebrahimi et al. (2004) explored some properties of the Shannon entropy of order statistics. Using (1), the transformation formula for the Rényi entropy applied to X_{i:n} = F_X^{-1}(U_{i:n}) gives the following representation for the Rényi entropy of order statistics (see Abbasnejad and Arghami, 2011a, p. 41):

H_\alpha(X_{i:n}) = H_\alpha(U_{i:n}) + \frac{1}{1-\alpha}\log E[f_X(F_X^{-1}(W_i))]^{\alpha-1},    (5)
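Representation (5) can be verified numerically. The sketch below is our own illustration, not from the paper: it takes the unit exponential for convenience, compares a direct quadrature of H_α(X_{i:n}) with the right-hand side of (5), and uses the closed Beta-moment form E[(1 − W_i)^{α−1}] = B(a, b + α − 1)/B(a, b) for W_i ~ Beta(a, b).

```python
import math

def trapz(g, lo, hi, m):
    """Composite trapezoid rule for a smooth integrand."""
    h = (hi - lo) / m
    return h * (0.5 * (g(lo) + g(hi)) + sum(g(lo + k * h) for k in range(1, m)))

alpha, i, n = 2.0, 2, 5
B = lambda a, b: math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Left side: H_alpha(X_{i:n}) by integrating f_{X_{i:n}}^alpha directly,
# with F(x) = 1 - exp(-x), f(x) = exp(-x) on (0, infinity).
c = i * math.comb(n, i)                       # i * C(n, i) = 1 / B(i, n-i+1)
def f_os(x):
    F = 1.0 - math.exp(-x)
    return c * F ** (i - 1) * (1.0 - F) ** (n - i) * math.exp(-x)

lhs = math.log(trapz(lambda x: f_os(x) ** alpha, 0.0, 40.0, 50000)) / (1.0 - alpha)

# Right side of (5): here f_X(F_X^{-1}(u)) = 1 - u for the exponential, and
# W_i ~ Beta((i-1)alpha + 1, (n-i)alpha + 1).
def b_unif(u):                                 # density of U_{i:n} ~ Beta(i, n-i+1)
    return u ** (i - 1) * (1.0 - u) ** (n - i) / B(i, n - i + 1)

h_uni = math.log(trapz(lambda u: b_unif(u) ** alpha, 0.0, 1.0, 20000)) / (1.0 - alpha)
a_w, b_w = (i - 1) * alpha + 1.0, (n - i) * alpha + 1.0
rhs = h_uni + math.log(B(a_w, b_w + alpha - 1.0) / B(a_w, b_w)) / (1.0 - alpha)
```

The two sides agree up to quadrature error, as (5) predicts.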
where W_i has the beta distribution with parameters (i-1)\alpha + 1 and (n-i)\alpha + 1, and U_{i:n} stands for the i-th order statistic from a random sample of size n from the Uniform(0, 1) distribution.

Example 1. Let X be a random variable having the Beta(θ, 1) distribution, F_X(x) = x^\theta, 0 < x < 1. Then f_X(F_X^{-1}(u)) = \theta u^{(\theta-1)/\theta}, and by (5) we have
H_\alpha(X_{n-r+1:n}) - H_\alpha(X_{r:n}) = H_\alpha(U_{n-r+1:n}) - H_\alpha(U_{r:n}) + \frac{1}{1-\alpha}\log E\big[W_{n-r+1}^{(\theta-1)(\alpha-1)/\theta}\big] - \frac{1}{1-\alpha}\log E\big[W_r^{(\theta-1)(\alpha-1)/\theta}\big]
= \frac{1}{1-\alpha}\Big\{\log E\big[W_{n-r+1}^{(\theta-1)(\alpha-1)/\theta}\big] - \log E\big[W_r^{(\theta-1)(\alpha-1)/\theta}\big]\Big\},    (6)
where W_r has the Beta((r-1)\alpha + 1, (n-r)\alpha + 1) distribution. It is easy to see that W_r is stochastically smaller than W_{n-r+1} for r \le (n+1)/2. Then the expression in braces in Eq. (6) is positive or negative according as (\theta - 1)(\alpha - 1) is positive or negative, respectively. Finally, we conclude that, for r \le (n+1)/2, the sign of H_\alpha(X_{n-r+1:n}) - H_\alpha(X_{r:n}) depends on θ: it is positive if θ < 1 and negative if θ > 1. Also, H_\alpha(X_{n-r+1:n}) - H_\alpha(X_{r:n}) = 0 for θ = 1.
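The sign claim in Example 1 is easy to check numerically; the following Python sketch (our own illustration) evaluates the gap in Eq. (6) using the Beta-moment identity E[W^e] = B(a + e, b)/B(a, b) for W ~ Beta(a, b).

```python
import math

B = lambda a, b: math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def entropy_gap(theta, alpha, r, n):
    """H_alpha(X_{n-r+1:n}) - H_alpha(X_{r:n}) for Beta(theta, 1) via Eq. (6)."""
    e = (theta - 1.0) * (alpha - 1.0) / theta
    def log_moment(rr):                         # log E[W_rr^e]
        a, b = (rr - 1) * alpha + 1.0, (n - rr) * alpha + 1.0
        return math.log(B(a + e, b) / B(a, b))
    return (log_moment(n - r + 1) - log_moment(r)) / (1.0 - alpha)

# For r <= (n+1)/2 the gap is positive when theta < 1, negative when
# theta > 1, and zero at theta = 1 (the uniform, which is symmetric).
gap_small = entropy_gap(0.5, 2.0, r=2, n=7)
gap_large = entropy_gap(2.0, 2.0, r=2, n=7)
gap_unif = entropy_gap(1.0, 2.0, r=2, n=7)
```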
Now we raise the question: "Under which conditions does H_\alpha(X_{n-r+1:n}) - H_\alpha(X_{r:n}) = 0 hold?" In the rest of the paper, we investigate the conditions for equality of the Rényi entropies of upper and lower ordered data. Using Theorem 1 and identity (5), we prove in the next result that the equality of the Rényi entropies of upper and lower order statistics is a characteristic property of symmetric distributions. First, we present the following lemma.

Lemma 1. Let X be a continuous random variable with cdf F_X and pdf f_X with support S_X. Then the identity

f_X(F_X^{-1}(u)) = f_X(F_X^{-1}(1-u)), \quad \text{for almost all } u \in (0, 1),    (7)

implies that there exists a constant c such that F_X(c - x) = 1 - F_X(c + x) for all x \in S_X.

Proof. Suppose the identity (7) holds; then, using the fact that \frac{d}{du}F_X^{-1}(u) = 1/f_X(F_X^{-1}(u)), we find \frac{d}{du}F_X^{-1}(u) = -\frac{d}{du}F_X^{-1}(1-u) for almost all u \in (0, 1). This implies F_X^{-1}(u) = -F_X^{-1}(1-u) + d for all u \in (0, 1), where d is a constant. Then we readily obtain F_X(c - x) = 1 - F_X(c + x) for all x \in S_X, where c = d/2.

It may be noted that, by Lemma 1, the identity (7) is equivalent to saying that F_X is symmetric.

Theorem 3. Let X_1, X_2, \ldots, X_n be a random sample of size n from a continuous cdf F_X(x) with pdf f_X(x). Then F_X is symmetric if and only if, for a fixed r \ge 1, H_\alpha(X_{r:n}) = H_\alpha(X_{n-r+1:n}) for all n \ge r.
Proof. For a distribution symmetric about μ (without loss of generality take μ = 0), it is obvious that X_{r:n} \stackrel{d}{=} -X_{n-r+1:n}; hence the Rényi entropies of the two statistics are equal. For the necessity part, from (5) and the assumptions we have

0 = H_\alpha(X_{r:n}) - H_\alpha(X_{n-r+1:n}) = \frac{1}{1-\alpha}\log E[f_X(F_X^{-1}(W_r))]^{\alpha-1} - \frac{1}{1-\alpha}\log E[f_X(F_X^{-1}(W_{n-r+1}))]^{\alpha-1},    (8)

where W_r has the beta distribution with parameters (r-1)\alpha + 1 and (n-r)\alpha + 1. Using the fact that W_{n-r+1} \stackrel{d}{=} 1 - W_r, from (8) we find

E[f_X(F_X^{-1}(W_r))]^{\alpha-1} - E[f_X(F_X^{-1}(1-W_r))]^{\alpha-1} = 0.
Consequently, we get

\int_0^1 (1-u)^{(r-1)\alpha} u^{-r\alpha}\big\{[f_X(F_X^{-1}(u))]^{\alpha-1} - [f_X(F_X^{-1}(1-u))]^{\alpha-1}\big\} u^{n\alpha}\,du = 0.    (9)
If (9) holds for a fixed r \ge 1 and for all n \ge r, then from the completeness property and Theorem 2 it follows that

f_X(F_X^{-1}(u)) = f_X(F_X^{-1}(1-u)),    (10)

for almost all u \in (0, 1). Thus, by Lemma 1, the proof is completed.
Remark 1. Let X_1, X_2, \ldots, X_n be a random sample of size n from a continuous cdf F_X(x), and suppose that for a fixed r \ge 1, H_\alpha(X_{n-r+1:n}) - H_\alpha(X_{r:n}) = b for all n \ge r, where b is a constant that does not depend on n. Then, by proceeding as in the proof of Theorem 3, we find

F_X(d(c - x)) + F_X(c + x) = 1, \quad \text{for all } x \in \mathbb{R},    (11)

where d = e^b > 0 and c are constants. Obviously, for d = 1, F is symmetric, and for d \ne 1 the identity (11) implies that F belongs to a class of non-symmetric distributions.
A natural question arising in connection with Theorem 3 is the following: for a fixed r, is the equality of entropies required for all n \ge r in order for F to be symmetric, or are certain subsequences sufficient? In view of Theorem 1, for fixed r, it is deduced that Theorem 3 holds for any subsequence \{n_j, j \ge 1\} such that \sum_{j=1}^{+\infty} n_j^{-1} = +\infty, where r \le n_1 < n_2 < \cdots.
Corollary 1. Under the assumptions of Theorem 3, F is symmetric if and only if, for a fixed r \ge 1, the Shannon informations of the upper and lower order statistics are equal, i.e., H(X_{r:n_j}) = H(X_{n_j-r+1:n_j}), for any subsequence \{n_j, j \ge 1\} such that \sum_{j=1}^{+\infty} n_j^{-1} = +\infty, where r \le n_1 < n_2 < \cdots.
Intuitively, we expect the upper and lower order statistics of a symmetric distribution to carry the same information; this fact is confirmed by Theorem 3.

3. Characterizations based on k-records

Let X_1, X_2, \ldots be an infinite sequence of iid random variables with common continuous distribution function. An observation X_j is called an upper record value if its value exceeds all previous observations, i.e., X_j is an upper record if X_j > \max\{X_1, X_2, \ldots, X_{j-1}\}. An upper k-record is basically the k-th largest observation in a partial sample; when new observations arrive, new k-records can occur, and in an infinite sequence every new observation that is bigger than the current k-record eventually becomes an upper k-record itself. Formally, following Arnold et al. (1998, p. 43) for the continuous case, let T_{1,k}^{U} = k, R_{1,k}^{U} = X_{1:k}, and for n \ge 2 let

T_{n,k}^{U} = \min\{j : j > T_{n-1,k}^{U},\ X_j > X_{T_{n-1,k}^{U}-k+1:T_{n-1,k}^{U}}\},

where X_{i:m} denotes the i-th order statistic in a sample of size m. An analogous definition can be given for lower k-record values: let T_{1,k}^{L} = k, R_{1,k}^{L} = X_{k:k}, and for n \ge 2 let

T_{n,k}^{L} = \min\{j : j > T_{n-1,k}^{L},\ X_j < X_{k:T_{n-1,k}^{L}}\}.

The sequences of upper and lower k-records are then defined by R_{n,k}^{U} = X_{T_{n,k}^{U}-k+1:T_{n,k}^{U}} and R_{n,k}^{L} = X_{k:T_{n,k}^{L}} for n \ge 1, respectively. For k = 1, the usual records are recovered. To fix these concepts, consider the following data, the first 20 values from Arnold et al. (1998, Exercise 2.36), representing the average July temperatures of Neuenburg, Switzerland, during the period 1864–1883:
19.0, 20.1, 18.4, 17.4, 19.7, 21.0, 21.4, 19.2, 19.9, 20.4, 20.9, 17.2, 20.2, 17.8, 18.1, 15.6, 19.4, 21.7, 16.2, 16.4.

For k = 3, the upper and lower 3-records and 3-record times are obtained to be

n          |    1     2     3     4     5     6     7
T^U_{n,3}  |    3     5     6     7    10    11    18
R^U_{n,3}  | 18.4  19.0  19.7  20.1  20.4  20.9  21.0
T^L_{n,3}  |    3     4    12    14    16    19    20
R^L_{n,3}  | 20.1  19.0  18.4  17.8  17.4  17.2  16.4
In reliability analysis, the life length of an r-out-of-m system is the (m - r + 1)-th order statistic in a sample of size m. Consequently, the n-th upper k-record value can be regarded as the life length of a k-out-of-T_{n,k}^{U} system. Now, let \{X_i, i \ge 1\} be a sequence of iid random variables with an absolutely continuous cdf F_X(x) and pdf f_X(x) with support S_X. We recall that R_{n,k}^{U} is identical in distribution to the n-th usual upper record (k = 1) from the cdf G(x) = 1 - (1 - F(x))^k; see Arnold et al. (1998, p. 43) or Theorem 22.6 in Nevzorov (2001, p. 93). Hence, the pdf of R_{n,k}^{U} is easily obtained as

f_{R_{n,k}^{U}}(x) = \frac{k^n [-\log(1 - F_X(x))]^{n-1}}{(n-1)!}\,[1 - F_X(x)]^{k-1} f_X(x), \quad x \in S_X.    (12)
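As a small numerical sanity check of (12) (ours, not from the paper): for the unit exponential, −log(1 − F(x)) = x, so (12) reduces to a Gamma(n, rate k) density and must integrate to 1.

```python
import math

def F(x):
    return 1.0 - math.exp(-x)   # unit exponential cdf

def f(x):
    return math.exp(-x)         # unit exponential pdf

def krecord_pdf(x, n, k):
    """Eq. (12): pdf of the n-th upper k-record from cdf F."""
    return (k ** n) * (-math.log(1.0 - F(x))) ** (n - 1) / math.factorial(n - 1) \
           * (1.0 - F(x)) ** (k - 1) * f(x)

def trapz(g, lo, hi, m=20000):
    h = (hi - lo) / m
    return h * (0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, m)))

# Total probability mass of the 4th upper 3-record's density
total = trapz(lambda x: krecord_pdf(x, n=4, k=3), 0.0, 25.0)
```

At x = 1 the density equals the Gamma(4, rate 3) value 3^4 e^{-3}/3!, and the quadrature gives total mass 1 up to discretization error.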
Replacing F_X(x) by 1 - F_X(x) in (12), the pdf of R_{n,k}^{L} is deduced. We refer the reader to Arnold et al. (1998) and the references therein for more details on the theory and applications of record values. Using Theorems 1 and 2, we have the following result, based on the Rényi entropy of k-record values, for symmetric distributions.

Theorem 4. Let \{X_i, i \ge 1\} be a sequence of iid continuous random variables with cdf F_X(x) and pdf f_X(x). Then the following two statements are equivalent:
(i) X has a symmetric distribution;
(ii) for a fixed k, H_\alpha(R_{n_j,k}^{U}) = H_\alpha(R_{n_j,k}^{L}) for a subsequence \{n_j, j \ge 1\} such that \sum_{j=1}^{+\infty} n_j^{-1} = +\infty, where 1 \le n_1 < n_2 < \cdots.
Proof. It is easy to show that (i) ⇒ (ii). To prove the sufficiency part, using (1) and (12), it can be shown that the transformation formula for the Rényi entropy applied to R_{n,k}^{U} = F_X^{-1}(U_{n,k}) gives the following representation for the Rényi entropy of the n-th upper k-record (see Abbasnejad and Arghami (2011b, p. 2313); the proof there is for k = 1):

H_\alpha(R_{n,k}^{U}) = H_\alpha(U_{n,k}) + \frac{1}{1-\alpha}\log E[f_X(F_X^{-1}(1 - e^{-V_n}))]^{\alpha-1},    (13)

where V_n has the gamma distribution with parameters n\alpha + 1 and (k-1)\alpha + 1, and U_{n,k} stands for the n-th upper k-record from the Uniform(0, 1) distribution. So, by (13) and the assumptions, we have

0 = H_\alpha(R_{n,k}^{U}) - H_\alpha(R_{n,k}^{L}) = \frac{1}{1-\alpha}\log E[f_X(F_X^{-1}(1 - e^{-V_n}))]^{\alpha-1} - \frac{1}{1-\alpha}\log E[f_X(F_X^{-1}(e^{-V_n}))]^{\alpha-1}.    (14)
By (14), we deduce that

\int_0^1 u^{(k-1)\alpha}\big\{[f_X(F_X^{-1}(u))]^{\alpha-1} - [f_X(F_X^{-1}(1-u))]^{\alpha-1}\big\}[(-\log u)^{\alpha}]^{n}\,du = 0.    (15)

If (15) holds for n = n_j such that \sum_{j=1}^{+\infty} n_j^{-1} = +\infty, where 1 \le n_1 < n_2 < \cdots, then we conclude that

[f_X(F_X^{-1}(u))]^{\alpha-1} - [f_X(F_X^{-1}(1-u))]^{\alpha-1} = 0,

for almost all u \in (0, 1). The rest of the proof is similar to that of Theorem 3, and the result follows.
A result similar to Remark 1 holds for k-records.

Remark 2. Suppose the assumptions of Theorem 4 hold and, in addition, that in part (ii) of Theorem 4, H_\alpha(R_{n_j,k}^{U}) - H_\alpha(R_{n_j,k}^{L}) = b, where b is a constant that does not depend on n_j and k. Then, by proceeding as in the proof of Theorem 4, we get

F_X(d(c - x)) + F_X(c + x) = 1, \quad \text{for all } x \in \mathbb{R},

where d = e^b > 0 and c are constants.
It may be noted that the results in Theorem 4 and Remark 2 also hold for Shannon information. Moreover, taking k = 1 in Theorem 4, the results for usual records are deduced. Ahmadi and Fashandi (2009) obtained a result similar to Theorem 4 based on Shannon information in terms of current records, for all n \ge 1. Notice that if current records are available, then usual records (k = 1) can be extracted, but the converse is not true; thus, here we obtain the same result under weaker conditions.

4. Characterizations based on concomitants: FGM family

In this section, based on some properties of concomitants of order statistics and of record values from the Farlie–Gumbel–Morgenstern (FGM) family of bivariate distributions, we give characterizations of the uniform distribution, which is a member of the class of symmetric distributions. The FGM family has found extensive use in practice; see, for example, Section 2.2 of Balakrishnan and Lai (2009) and Chapter 5 of Drouet Mari and Kotz (2001). This family is specified by the marginal distribution functions F_X(x) and F_Y(y) of random variables X and Y, respectively, and a parameter λ, resulting in the bivariate distribution function

F_{X,Y}(x, y) = F_X(x)F_Y(y)[1 + \lambda(1 - F_X(x))(1 - F_Y(y))],    (16)

where λ is known as the association parameter and |\lambda| < 1. It is obvious that (16) includes the case of independence (λ = 0). If the pdfs f_X and f_Y exist, then the corresponding joint probability density function is given by

f_{X,Y}(x, y) = f_X(x)f_Y(y)[1 + \lambda(1 - 2F_X(x))(1 - 2F_Y(y))],    (17)
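Two standard facts about (17) with Uniform(0, 1) marginals can be checked numerically: the density integrates to 1, and the correlation equals λ/3 (a well-known FGM property). The sketch below is our own illustration; grid sizes are arbitrary choices.

```python
def fgm_pdf(x, y, lam):
    """Eq. (17) with Uniform(0,1) marginals: f_X = f_Y = 1, F_X = x, F_Y = y."""
    return 1.0 + lam * (1.0 - 2.0 * x) * (1.0 - 2.0 * y)

def grid_integral(g, m=400):
    """Midpoint rule on the unit square."""
    h = 1.0 / m
    return sum(g((i + 0.5) * h, (j + 0.5) * h)
               for i in range(m) for j in range(m)) * h * h

lam = 0.7
mass = grid_integral(lambda x, y: fgm_pdf(x, y, lam))
exy = grid_integral(lambda x, y: x * y * fgm_pdf(x, y, lam))
# Both marginals are Uniform(0,1): mean 1/2, variance 1/12
corr = (exy - 0.25) / (1.0 / 12.0)
```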
where f_X(x) and f_Y(y) are the marginal pdfs of X and Y, respectively. For more details about the FGM family, see Drouet Mari and Kotz (2001) and Balakrishnan and Lai (2009). In the sequel, we give two characterizations of the uniform distribution (Theorems 5 and 6) based on entropies of concomitants of order statistics and of record values. First, consider a random sample (X_i, Y_i), i = 1, 2, \ldots, n, from a bivariate distribution. If the pairs are ordered by their X variates, then the Y variate associated with the r-th order statistic X_{r:n} of X is denoted by Y_{[r:n]}, 1 \le r \le n, and is called the concomitant of the r-th order statistic. We refer the reader to David and Nagaraja (1998) for more details. The pdf of Y_{[r:n]}, denoted by f_{Y_{[r:n]}}, is given by (see David and Nagaraja, 1998, p. 490)

f_{Y_{[r:n]}}(y) = \int_{S_X} f_{Y|X}(y|x) f_{X_{r:n}}(x)\,dx,    (18)
where f_{X_{r:n}}(x) and S_X are the pdf and support of X_{r:n}, respectively. For the FGM family, from (4), (17) and (18) we readily find

f_{Y_{[r:n]}}(y) = f_Y(y)[1 + \lambda_{[r,n]}(1 - 2F_Y(y))],    (19)

where \lambda_{[r,n]} = \big(1 - \frac{2r}{n+1}\big)\lambda. Obviously, \lambda_{[n-r+1,n]} = -\lambda_{[r,n]}. Using (2) and (19), we immediately find the following representation for the entropy of Y_{[r:n]} from the FGM family:

H(Y_{[r:n]}) = -\int f_{Y_{[r:n]}}(y)\log f_{Y_{[r:n]}}(y)\,dy
= -\int_0^1 \big\{\log[1 + \lambda_{[r,n]}(1 - 2u)] + \log f_Y(F_Y^{-1}(u))\big\} f_{U_{[r:n]}}(u)\,du
= H(U_{[r:n]}) - E\big[\log f_Y(F_Y^{-1}(U_{[r:n]}))\big],    (20)
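A quick numerical check on (19) and (20) (our own illustration, not from the paper): the coefficient satisfies λ_{[n−r+1,n]} = −λ_{[r,n]}, and when Y is uniform the entropies of the concomitants of the r-th and (n − r + 1)-th order statistics coincide, since then H(Y_{[r:n]}) = H(U_{[r:n]}).

```python
import math

lam, n = 0.8, 6

def lam_rn(r):
    """lambda_[r,n] = (1 - 2r/(n+1)) * lambda, as below Eq. (19)."""
    return (1.0 - 2.0 * r / (n + 1.0)) * lam

def entropy_uniform_concomitant(lr, m=100000):
    """H(U_[r:n]) = -integral of g log g on (0,1), g(u) = 1 + lr*(1-2u);
    computed by the midpoint rule."""
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        g = 1.0 + lr * (1.0 - 2.0 * (i + 0.5) * h)
        total -= g * math.log(g) * h
    return total

h_low = entropy_uniform_concomitant(lam_rn(2))    # r = 2
h_high = entropy_uniform_concomitant(lam_rn(5))   # n - r + 1 = 5
```

Since a density on (0, 1) other than the uniform has negative (differential) entropy, both values are below zero, and they agree with each other.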
where U_{[r:n]} stands for the concomitant of the r-th order statistic in the bivariate FGM distribution with uniform marginals. We have the next result concerning the entropies of concomitants of order statistics for the FGM family. First, let C be the class of all continuous pdfs that have connected support and are monotone on it. (It is well known that the only connected subsets of \mathbb{R} are intervals.)

Theorem 5. Let (X_i, Y_i), i = 1, \ldots, n, be iid pairs of random variables from a bivariate FGM distribution. Then the following two statements are equivalent:
(i) Y has a uniform distribution on its support;
(ii) f_Y belongs to C and H(Y_{[r:n]}) = H(Y_{[n-r+1:n]}) for some fixed r and n (1 \le r \le n).

Proof. It is easy to show that (i) ⇒ (ii). We prove that (ii) ⇒ (i):

H(Y_{[n-r+1:n]}) - H(Y_{[r:n]}) = H(U_{[n-r+1:n]}) - H(U_{[r:n]}) + E\big[\log f_Y(F_Y^{-1}(U_{[r:n]}))\big] - E\big[\log f_Y(F_Y^{-1}(U_{[n-r+1:n]}))\big]
= E\big[\log f_Y(F_Y^{-1}(U_{[r:n]}))\big] - E\big[\log f_Y(F_Y^{-1}(1 - U_{[r:n]}))\big]
= \int_0^1 [1 + \lambda_{[r,n]}(1 - 2u)]\log\frac{f_Y(F_Y^{-1}(u))}{f_Y(F_Y^{-1}(1-u))}\,du
= \lambda_{[r,n]}\int_0^1 (1 - 2u)\log\frac{f_Y(F_Y^{-1}(u))}{f_Y(F_Y^{-1}(1-u))}\,du.    (21)
From (21), H(Y_{[n-r+1:n]}) - H(Y_{[r:n]}) = 0 is equivalent to

\int_0^1 (1 - 2u)\log\frac{f_Y(F_Y^{-1}(u))}{f_Y(F_Y^{-1}(1-u))}\,du = 0.    (22)
Notice that for 0 < u < 0.5 we have F_Y^{-1}(u) \le F_Y^{-1}(1-u), and for 0.5 \le u < 1 we have F_Y^{-1}(1-u) \le F_Y^{-1}(u). Since f_Y belongs to C, if f_Y is non-decreasing (non-increasing), then the integrand in (22) is non-positive (non-negative). So f_Y must be a constant function on its support, implying that Y has a uniform distribution.

Now, let (X_1, Y_1), (X_2, Y_2), \ldots be a sequence of iid pairs of random variables with common absolutely continuous joint cdf F_{X,Y}. Let \{R_{n,1}^{U}, n \ge 1\} be the sequence of upper record values in the sequence of X's. Then the Y-variate associated with the X-value that qualified as the n-th record is called the concomitant of the n-th record and is denoted by R_{[n,1]}^{U}; see, for example, Arnold et al. (1998) for more details. The pdf of R_{[n,1]}^{U}, denoted by f_{R_{[n,1]}^{U}}, is given by (see Arnold et al., 1998, p. 272)

f_{R_{[n,1]}^{U}}(y) = \int_{S_X} f_{Y|X}(y|x) f_{R_{n,1}^{U}}(x)\,dx.

For the FGM family in (16), we have the explicit expression (see, for example, Arnold et al., 1998, p. 274)

f_{R_{[n,1]}^{U}}(y) = f_Y(y)[1 + \lambda(1 - 2^{-n})(2F_Y(y) - 1)].    (23)

By (2) and (23), we have the following representation for the entropy of the concomitant of the n-th upper record:

H(R_{[n,1]}^{U}) = H(U_{[n,1]}) - E\big[\log f_Y(F_Y^{-1}(U_{[n,1]}))\big],    (24)

where U_{[n,1]} stands for the concomitant of the n-th upper record in the bivariate FGM distribution with uniform marginals.
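As a hedged numerical illustration of (23) and (24) (ours, not from the paper): with uniform Y, the upper-record concomitant has density g_n(u) = 1 + λ(1 − 2^{−n})(2u − 1) on (0, 1). For the lower-record concomitant we take the sign-flipped coefficient, an assumption based on the λ → −λ symmetry of the FGM family rather than a formula quoted from the paper. The density integrates to 1, and the two entropies coincide, consistent with Theorem 6 for uniform Y.

```python
import math

lam, n = 0.6, 3
coeff = lam * (1.0 - 2.0 ** (-n))   # coefficient in Eq. (23)

def g(u, c):
    """Concomitant density on (0,1) for uniform Y: 1 + c*(2u - 1)."""
    return 1.0 + c * (2.0 * u - 1.0)

def entropy(c, m=100000):
    """-integral of g log g on (0,1) by the midpoint rule."""
    h = 1.0 / m
    return -sum(g((i + 0.5) * h, c) * math.log(g((i + 0.5) * h, c)) * h
                for i in range(m))

mass = sum(g((i + 0.5) / 100000.0, coeff) / 100000.0 for i in range(100000))
h_upper = entropy(coeff)            # upper-record concomitant
h_lower = entropy(-coeff)           # assumed lower-record analogue
```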
We have the next result regarding the entropies of concomitants of record values for the FGM family.

Theorem 6. Let \{(X_i, Y_i), i \ge 1\} be a sequence of iid pairs of random variables from a bivariate FGM distribution. Then the following two statements are equivalent:
(i) Y has a uniform distribution on its support;
(ii) f_Y belongs to C and H(R_{[n,1]}^{U}) = H(R_{[n,1]}^{L}) for some fixed n \ge 1.

Proof. The result follows from Eq. (24), proceeding similarly to the proof of Theorem 5.
Acknowledgments

The authors would like to thank the referee and associate editor for their comments and suggestions, which significantly improved this paper. This research was supported by a grant from Ferdowsi University of Mashhad; No. MS90225AHM.

References

Abbasnejad, M., Arghami, N.R., 2011a. Rényi entropy properties of order statistics. Communications in Statistics—Theory and Methods 40, 40–52.
Abbasnejad, M., Arghami, N.R., 2011b. Rényi entropy properties of records. Journal of Statistical Planning and Inference 141, 2312–2320.
Ahmadi, J., Fashandi, M., 2009. Some characterization and ordering results based on entropies of current records. Statistics and Probability Letters 79, 2053–2059.
Arnold, B.C., Balakrishnan, N., Nagaraja, H.N., 1998. Records. John Wiley & Sons, New York.
Arnold, B.C., Balakrishnan, N., Nagaraja, H.N., 2008. A First Course in Order Statistics. Classics in Applied Mathematics, vol. 54. Society for Industrial and Applied Mathematics, Philadelphia.
Balakrishnan, N., Lai, C.D., 2009. Continuous Bivariate Distributions, second ed. Springer, New York.
Baratpour, S., Ahmadi, J., Arghami, N.R., 2007. Some characterizations based on entropy of order statistics and record values. Communications in Statistics—Theory and Methods 36, 47–57.
Baratpour, S., Ahmadi, J., Arghami, N.R., 2008. Characterizations based on Rényi entropy of order statistics and record values. Journal of Statistical Planning and Inference 138, 2544–2551.
Cover, T.M., Thomas, J.A., 1991. Elements of Information Theory. Wiley-Interscience, New York.
David, H.A., Nagaraja, H.N., 1998. Concomitants of order statistics. In: Balakrishnan, N., Rao, C.R. (Eds.), Order Statistics: Theory & Methods. Elsevier, Amsterdam, pp. 487–513.
David, H.A., Nagaraja, H.N., 2003. Order Statistics, third ed. John Wiley & Sons, Hoboken, New Jersey.
Drouet Mari, D., Kotz, S., 2001. Correlation and Dependence. Imperial College Press, London.
Ebrahimi, N., Soofi, E.S., Zahedi, H., 2004. Information properties of order statistics and spacings. IEEE Transactions on Information Theory 50, 177–183.
Higgins, J.R., 2004. Completeness and Basis Properties of Sets of Special Functions. Cambridge University Press, New York.
Hofmann, G., Balakrishnan, N., Ahmadi, J., 2005. Characterization of hazard function factorization by Fisher information in minima and upper record values. Statistics and Probability Letters 72, 51–57.
Hwang, J.S., Lin, G.D., 1984. On a generalized moment problem II. Proceedings of the American Mathematical Society 91, 577–580.
Nevzorov, V.B., 2001. Records: Mathematical Theory. Translations of Mathematical Monographs, vol. 194. American Mathematical Society, Providence, RI.
Park, S., Zheng, G., 2004. Equal Fisher information in order statistics. Sankhyā 66, 20–34.
Raqab, M.Z., Awad, A.M., 2000. Characterizations of the Pareto and related distributions. Metrika 52, 63–67.
Raqab, M.Z., Awad, A.M., 2001. A note on characterization based on Shannon entropy of record statistics. Statistics 35, 411–413.
Shannon, C.E., 1948. A mathematical theory of communication. Bell System Technical Journal 27, 379–423.
Zheng, G., 2001. A characterization of the factorization of hazard function by the Fisher information under type II censoring with application to the Weibull family. Statistics and Probability Letters 52, 249–253.