Notes on the exponentiated half logistic distribution


Accepted Manuscript

Notes on the Exponentiated Half Logistic Distribution
Jung-In Seo, Suk-Bok Kang

PII: S0307-904X(15)00042-6
DOI: http://dx.doi.org/10.1016/j.apm.2015.01.039
Reference: APM 10390

To appear in: Appl. Math. Modelling

Received Date: 30 April 2013
Revised Date: 28 March 2014
Accepted Date: 8 January 2015

Please cite this article as: J-I. Seo, S-B. Kang, Notes on the Exponentiated Half Logistic Distribution, Appl. Math. Modelling (2015), doi: http://dx.doi.org/10.1016/j.apm.2015.01.039


Notes on the Exponentiated Half Logistic Distribution

Jung-In Seo, Suk-Bok Kang∗

Department of Statistics, Yeungnam University, Gyeongsan 712-749, Korea

Abstract

In this paper, moment estimators and maximum likelihood estimators of the unknown parameters in the exponentiated half-logistic distribution are derived, and an entropy estimator is obtained for the distribution. An exact expression of the Fisher information is derived to obtain approximate confidence intervals for the unknown parameters in the distribution, and, for illustration purposes, the validity of the proposed estimation method is assessed using real data.

Keywords: Entropy, Exponentiated half-logistic distribution, Fisher information, Maximum likelihood estimator, Moment estimator

1. Introduction

Gupta et al. [1] proposed the concept of an exponentiated distribution and discussed an exponentiated exponential distribution (EED) with two parameters (its scale and shape). Gupta and Kundu [2] considered the EED as an alternative to the gamma distribution (GD) and the Weibull distribution. Mudholkar and Srivastava [3] introduced an exponentiated Weibull distribution for modeling the bathtub failure rate based on lifetime data. For climate modeling, Nadarajah [4] proposed a generalization of the Gumbel distribution referred to as the exponentiated Gumbel distribution and provided its mathematical properties. Recently, Kang and Seo [5] discussed the estimation of the scale parameter and reliability function of the exponentiated half-logistic distribution (EHLD) based on progressively Type-II censored samples. The cumulative distribution function (cdf) and probability density function (pdf) of a random variable X with the EHLD are given by

F(x) = [(1 − e^{−θx})/(1 + e^{−θx})]^λ

and

f(x) = θλ [(1 − e^{−θx})/(1 + e^{−θx})]^λ · 2e^{−θx}/(1 − e^{−2θx}),  x > 0, θ, λ > 0,   (1)

where θ is the reciprocal of the scale parameter and λ is the shape parameter. Figure 1 presents the pdf of the standard EHLD for various shape parameters. It is observed that the pdf of the standard EHLD is a decreasing function for λ ≤ 1, whereas it is a right-skewed unimodal function for λ > 1. These properties resemble those of the GD and the EED. As a special case, if λ = 1, then the EHLD is the half-logistic distribution. The failure rate function of the EHLD can be expressed as

h(x) = 2θλe^{−θx}F(x) / [(1 − e^{−2θx})(1 − F(x))].
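Because F(x) has a closed form, the pdf, cdf, and failure rate above are straightforward to evaluate numerically. A minimal sketch (the function names are illustrative, not from the paper):

```python
import math

def ehld_cdf(x, theta, lam):
    """cdf (1): F(x) = [(1 - e^{-theta x}) / (1 + e^{-theta x})]^lam, x > 0."""
    e = math.exp(-theta * x)
    return ((1.0 - e) / (1.0 + e)) ** lam

def ehld_pdf(x, theta, lam):
    """pdf (1): f(x) = theta*lam*[(1-e)/(1+e)]^lam * 2e / (1 - e^2)."""
    e = math.exp(-theta * x)
    return theta * lam * ((1.0 - e) / (1.0 + e)) ** lam * 2.0 * e / (1.0 - e * e)

def ehld_hazard(x, theta, lam):
    """failure rate h(x) = f(x) / (1 - F(x)); tends to theta as x grows."""
    return ehld_pdf(x, theta, lam) / (1.0 - ehld_cdf(x, theta, lam))
```

A quick check that h(x) approaches θ for large x (as noted below for Figure 2) can be done by evaluating `ehld_hazard` at a moderately large argument.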

∗Corresponding author. Email address: [email protected] (Suk-Bok Kang)

Preprint submitted to Elsevier, February 7, 2015

Figure 1: The pdf of the standard EHLD for various shape parameters.

Figure 2: The failure rate function of the standard EHLD for various shape parameters.

Note that the failure rate function approaches θ as x → ∞ by L'Hospital's rule. The failure rate function of the standard EHLD for various shape parameters is plotted in Figure 2. For λ < 1, it decreases to a positive constant and then increases to θ. For λ ≥ 1, it increases to θ, which is similar to the behavior of the failure rate functions of the GD and the EED when their shape parameters exceed 1. Gupta and Kundu [6] examined the behavior of the failure rate functions of the GD and the EED. On the other hand, because the cdf of the GD does not have a closed form unless the shape parameter is an integer, its reliability and failure rate functions cannot be expressed in closed form. However, the cdf of the EHLD has an explicit expression. Section 2 develops some theorems on relationships between the EHLD and well-known probability distributions. Section 3 derives the method of moments estimators (MMEs), maximum likelihood estimators (MLEs), and an entropy estimator for the EHLD. Section 4 discusses Fisher information and approximate confidence intervals (CIs) for the unknown parameters in the EHLD. Section 5 uses a real data set to check whether the EHLD fits the data better than other well-known distributions and assesses the validity of the proposed method, and Section 6 concludes the paper.


2. Related Distributions

Suppose that X is a random variable with the exponentiated half-logistic pdf given in (1). Then some known distributions related to the EHLD can be derived using Theorem 2.1.5 in [7, p. 51].

Theorem 2.1. If

Y = log[(1 + e^{−θX})/(1 − e^{−θX})],

then Y has an exponential distribution with mean 1/λ.

Proof. Let y = log[(1 + e^{−θx})/(1 − e^{−θx})]. Then x = log[(e^y + 1)/(e^y − 1)]/θ, and the Jacobian of the transformation is

J = 2e^y / [θ(1 − e^{2y})].

Therefore, the density function of Y is

g(y) = λe^{−λy},  y > 0,

which is the pdf of the exponential distribution.

Theorem 2.2. If

Y = α(1 + e^{−θX})/(1 − e^{−θX}),

then Y has a Type-I Pareto distribution with parameters α and λ.

Proof. Let y = α(1 + e^{−θx})/(1 − e^{−θx}). Then x = log[(y + α)/(y − α)]/θ, and the Jacobian of the transformation is

J = 2α / [θ(α² − y²)].

Therefore, the density function of Y is

g(y) = λα^λ y^{−λ−1},  y ≥ α,

which is the pdf of the Type-I Pareto distribution.

Theorem 2.3. If

Y = α + (β − α)[(1 − e^{−θX})/(1 + e^{−θX})]^λ,

then Y has a uniform distribution on (α, β).

Proof. Let y = α + (β − α)[(1 − e^{−θx})/(1 + e^{−θx})]^λ. Then

x = log{[(β − α)^{1/λ} + (y − α)^{1/λ}] / [(β − α)^{1/λ} − (y − α)^{1/λ}]}/θ,

and the Jacobian of the transformation is

J = 2[(y − α)(β − α)]^{1/λ} / {θλ(y − α)[(β − α)^{2/λ} − (y − α)^{2/λ}]}.

Therefore, the density function of Y is

g(y) = 1/(β − α),  α < y < β,

which is the pdf of the uniform distribution.

Theorem 2.4. If

Y = β[λ log((1 + e^{−θX})/(1 − e^{−θX}))]^{1/α},

then Y has a Weibull distribution with parameters α and β.

Proof. Let y = β[λ log((1 + e^{−θx})/(1 − e^{−θx}))]^{1/α}. Then

x = log{[exp((y/β)^α/λ) + 1] / [exp((y/β)^α/λ) − 1]}/θ,

and the Jacobian of the transformation is

J = 2α(y/β)^α exp[(y/β)^α/λ] / {θλy[1 − exp(2(y/β)^α/λ)]}.

Therefore, the density function of Y is

g(y) = (α/β)(y/β)^{α−1} exp[−(y/β)^α],  y > 0,

which is the pdf of the Weibull distribution.

Theorem 2.5. If

Y = μ − σ log{[(1 + e^{−θX})/(1 − e^{−θX})]^λ − 1},

then Y has a logistic distribution with parameters μ and σ.

Proof. Let y = μ − σ log{[(1 + e^{−θx})/(1 − e^{−θx})]^λ − 1}. Then

x = log{[(exp(−(y − μ)/σ) + 1)^{1/λ} + 1] / [(exp(−(y − μ)/σ) + 1)^{1/λ} − 1]}/θ,

and the Jacobian of the transformation is

J = 2 exp(−(y − μ)/σ)[exp(−(y − μ)/σ) + 1]^{1/λ−1} / {θλσ[(exp(−(y − μ)/σ) + 1)^{2/λ} − 1]}.

Therefore, the density function of Y is

g(y) = exp(−(y − μ)/σ) / {σ[1 + exp(−(y − μ)/σ)]²},  −∞ < y < ∞,

which is the pdf of the logistic distribution.

Theorem 2.6. If

Y = μ − σ log[λ log((1 + e^{−θX})/(1 − e^{−θX}))],

then Y has a Gumbel distribution with parameters μ and σ.

Proof. Let y = μ − σ log[λ log((1 + e^{−θx})/(1 − e^{−θx}))]. Then

x = log{[exp(exp(−(y − μ)/σ)/λ) + 1] / [exp(exp(−(y − μ)/σ)/λ) − 1]}/θ,

and the Jacobian of the transformation is

J = 2 exp[exp(−(y − μ)/σ)/λ] exp(−(y − μ)/σ) / {θλσ[exp(2 exp(−(y − μ)/σ)/λ) − 1]}.

Therefore, the density function of Y is

g(y) = (1/σ) exp[−(y − μ)/σ − exp(−(y − μ)/σ)],  −∞ < y < ∞,

which is the pdf of the Gumbel distribution.
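Theorem 2.1 also yields a convenient simulation route: draw Y from an exponential distribution with mean 1/λ and invert the transformation to get an EHLD variate. A sketch (the function name is illustrative):

```python
import math
import random

def ehld_sample(theta, lam, rng=random):
    """Draw X from the EHLD via Theorem 2.1:
    Y ~ Exponential(rate lam), then X = log((e^Y + 1)/(e^Y - 1)) / theta."""
    y = rng.expovariate(lam)  # exponential with mean 1/lam
    return math.log((math.exp(y) + 1.0) / (math.exp(y) - 1.0)) / theta
```

If the sampler is correct, applying the cdf F to the samples should give approximately uniform values on (0, 1), which is an easy empirical check.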

3. Estimation

3.1. Method of Moment Estimation

If X follows an EHLD with parameters (θ, λ), then the corresponding k-th moment is

E(X^k) = θλ ∫_0^∞ [(1 − e^{−θx})/(1 + e^{−θx})]^λ (2e^{−θx}/(1 − e^{−2θx})) x^k dx.

Given u = [(1 − e^{−θx})/(1 + e^{−θx})]^λ,

E(X^k) = (1/θ^k) ∫_0^1 [log((1 + u^{1/λ})/(1 − u^{1/λ}))]^k du.

With

log[(1 + z)/(1 − z)] = 2 Σ_{j=1}^∞ z^{2j−1}/(2j − 1)  for z² < 1,   (2)

the first moment is obtained as

E(X) = (1/θ) ∫_0^1 log((1 + u^{1/λ})/(1 − u^{1/λ})) du = (2λ/θ) h1(λ),   (3)

where

h1(λ) = Σ_{j=1}^∞ 1/[(2j − 1)(2j − 1 + λ)].

Similarly,

E(X²) = (1/θ²) ∫_0^1 [log((1 + u^{1/λ})/(1 − u^{1/λ}))]² du.

Here the integrand is decomposed as

[log((1 + u^{1/λ})/(1 − u^{1/λ}))]² = [log(1 + u^{1/λ})]² + [log(1 − u^{1/λ})]² − 2 log(1 + u^{1/λ}) log(1 − u^{1/λ}).   (4)

Then, with the use of the two series expansions

[log(1 ± z)]² = 2 Σ_{j=1}^∞ (∓1)^{j+1} (z^{j+1}/(j + 1)) Σ_{i=1}^j 1/i  for z² < 1,   (5)

log(1 + z) log(1 − z) = −Σ_{j=1}^∞ (z^{2j}/j) Σ_{i=1}^{2j−1} (−1)^{i+1}/i  for z² < 1,   (6)

the second moment is obtained as

E(X²) = (2λ/θ²)[h2(λ) + h3(λ)],   (7)

where

h2(λ) = Σ_{j=1}^∞ [1 + (−1)^{j+1}] / [(j + 1)(j + 1 + λ)] Σ_{i=1}^j 1/i,
h3(λ) = Σ_{j=1}^∞ 1/[j(2j + λ)] Σ_{i=1}^{2j−1} (−1)^{i+1}/i.

Let X1, . . . , Xn be independent and identically distributed from the EHLD with parameters (θ, λ). From (3) and (7), the following can be obtained:

X̄²/m2 = 2λ[h1(λ)]² / [h2(λ) + h3(λ)],   (8)

where m2 = Σ_{i=1}^n Xi²/n. Then the MME λ̃ can be determined by solving equation (8). Note that equation (8) is independent of θ. Therefore, once the MME λ̃ is obtained, the MME of θ can be obtained as

θ̃ = (2λ̃/X̄) h1(λ̃).

3.2. Maximum Likelihood Estimation

Let X1, . . . , Xn be independent and identically distributed from the EHLD with parameters (θ, λ). Then the corresponding likelihood function is

L(θ, λ) = θ^n λ^n Π_{i=1}^n [(1 − e^{−θxi})/(1 + e^{−θxi})]^λ · 2e^{−θxi}/(1 − e^{−2θxi}).

The log-likelihood function is given by

log L(θ, λ) = n log θ + n log λ + n log 2 − θ Σ_{i=1}^n xi + λ Σ_{i=1}^n log[(1 − e^{−θxi})/(1 + e^{−θxi})] − Σ_{i=1}^n log(1 − e^{−2θxi}).   (9)

From (9), the likelihood equations for θ and λ are, respectively,

∂ log L(θ, λ)/∂θ = n/θ + λ Σ_{i=1}^n 2e^{−θxi} xi/(1 − e^{−2θxi}) − Σ_{i=1}^n (1 + e^{−2θxi}) xi/(1 − e^{−2θxi}) = 0

and

∂ log L(θ, λ)/∂λ = n/λ − Σ_{i=1}^n log[(1 + e^{−θxi})/(1 − e^{−θxi})] = 0.

With θ assumed to be known, the MLE of λ can be obtained as

λ̂(θ) = n / Σ_{i=1}^n log[(1 + e^{−θXi})/(1 − e^{−θXi})].   (10)

Note that, by Theorem 2.1, the MLE λ̂ follows an inverse gamma distribution with shape parameter n and scale parameter λn. Therefore, the MLE λ̂ has the following expectation and variance for n > 2:

E(λ̂) = λn/(n − 1)   (11)

and

Var(λ̂) = (λn)² / [(n − 1)²(n − 2)].

From (11), it can be observed that the MLE λ̂ is a biased but asymptotically unbiased estimator of λ. If θ is unknown, then the MLE of θ can be obtained by maximizing the following profile log-likelihood function:

log L(θ, λ̂(θ)) = −n + n log θ + n log(2n) − θ Σ_{i=1}^n xi − n log[Σ_{i=1}^n log((1 + e^{−θxi})/(1 − e^{−θxi}))] − Σ_{i=1}^n log(1 − e^{−2θxi}).

The MLE θ̂ can be obtained by using the Newton-Raphson method. Then the MLE λ̂ = λ̂(θ̂) can be calculated easily from (10).

3.3. Entropy Estimation

Let X be a random variable with cdf F(x) and pdf f(x). Then the differential entropy of X can be defined based on [8] as

H(f) = −∫_{−∞}^∞ f(x) log f(x) dx.

Suppose that X is a random variable with the exponentiated half-logistic pdf given in (1). Then the differential entropy of X can be written as

H(f) = −E[log f(X)]
     = −log θ − log λ − log 2 + θE(X) + λE{log[(1 + e^{−θX})/(1 − e^{−θX})]} + E[log(1 − e^{−2θX})].

Here E(X) is given in (3), and by Theorem 2.1,

E{log[(1 + e^{−θX})/(1 − e^{−θX})]} = 1/λ.

Furthermore, given u = [(1 − e^{−θx})/(1 + e^{−θx})]^λ and log(1 + z) = Σ_{j=1}^∞ (−1)^{j+1} z^j/j for −1 < z ≤ 1,

E[log(1 − e^{−2θX})] = θλ ∫_0^∞ [(1 − e^{−θx})/(1 + e^{−θx})]^λ (2e^{−θx}/(1 − e^{−2θx})) log(1 − e^{−2θx}) dx
                     = ∫_0^1 log(4u^{1/λ}) du − 2 ∫_0^1 log(1 + u^{1/λ}) du
                     = log 4 − 1/λ − 2λ h4(λ),

where

h4(λ) = Σ_{j=1}^∞ (−1)^{j+1} / [j(j + λ)].

Therefore, the entropy H(f) of the EHLD simplifies to

H(f) = 1 + log 2 − log θ − log λ − 1/λ + 2λ[h1(λ) − h4(λ)].   (12)

The estimator of entropy based on the MMEs, denoted by H̃(f), can be obtained by substituting θ̃ and λ̃ into (12). Similarly, the estimator of entropy based on the MLEs, denoted by Ĥ(f), can be obtained by substituting θ̂ and λ̂ into (12).
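Equation (12) can be evaluated directly by truncating the series h1 and h4; a sketch (the function name and truncation level are illustrative):

```python
import math

def ehld_entropy(theta, lam, terms=4000):
    """Entropy via (12): H = 1 + log 2 - log(theta) - log(lam) - 1/lam
    + 2*lam*(h1(lam) - h4(lam)), with both series truncated at `terms`."""
    h1 = sum(1.0 / ((2 * j - 1) * (2 * j - 1 + lam)) for j in range(1, terms + 1))
    h4 = sum((-1.0) ** (j + 1) / (j * (j + lam)) for j in range(1, terms + 1))
    return (1.0 + math.log(2.0) - math.log(theta) - math.log(lam)
            - 1.0 / lam + 2.0 * lam * (h1 - h4))
```

A sanity check is to compare the closed form against direct numerical integration of −f log f for a fixed (θ, λ).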

4. Fisher Information

Under certain regularity conditions [7, p. 516], the Fisher information matrix for (θ, λ) can be written as

I(θ, λ) = [ I11(θ, λ)  I12(θ, λ) ; I21(θ, λ)  I22(θ, λ) ]
        = [ E(−∂² log L(θ, λ)/∂θ²)  E(−∂² log L(θ, λ)/∂θ∂λ) ; E(−∂² log L(θ, λ)/∂λ∂θ)  E(−∂² log L(θ, λ)/∂λ²) ].

From (9),

−∂² log L(θ, λ)/∂θ² = n/θ² + λ Σ_{i=1}^n 2e^{−θxi}(1 + e^{−2θxi}) xi² / (1 − e^{−2θxi})² − Σ_{i=1}^n 4e^{−2θxi} xi² / (1 − e^{−2θxi})²,

−∂² log L(θ, λ)/∂θ∂λ = −Σ_{i=1}^n 2e^{−θxi} xi / (1 − e^{−2θxi}),

−∂² log L(θ, λ)/∂λ² = n/λ².

Then, with u = [(1 − e^{−θx})/(1 + e^{−θx})]^λ,

E[e^{−θX}(1 + e^{−2θX}) X² / (1 − e^{−2θX})²] = θλ ∫_0^∞ [(1 − e^{−θx})/(1 + e^{−θx})]^λ (2e^{−2θx}(1 + e^{−2θx})/(1 − e^{−2θx})³) x² dx
  = (1/(8θ²)) ∫_0^1 [log((1 + u^{1/λ})/(1 − u^{1/λ}))]² (u^{−2/λ} − u^{2/λ}) du.

The squared logarithm can be decomposed as in (4), and the expectation can then be obtained by using the series expansions (5) and (6) as

E[e^{−θX}(1 + e^{−2θX}) X² / (1 − e^{−2θX})²] = (λ/(4θ²))[h5(λ) + h6(λ)],   (13)

where

h5(λ) = Σ_{j=1}^∞ [(1 + (−1)^{j+1})/(j + 1)] [1/(j − 1 + λ) − 1/(j + 3 + λ)] Σ_{i=1}^j 1/i,
h6(λ) = Σ_{j=1}^∞ (1/j) [1/(2j − 2 + λ) − 1/(2j + 2 + λ)] Σ_{i=1}^{2j−1} (−1)^{i+1}/i.

In the same way,

E[e^{−2θX} X² / (1 − e^{−2θX})²] = θλ ∫_0^∞ [(1 − e^{−θx})/(1 + e^{−θx})]^λ (2e^{−3θx}/(1 − e^{−2θx})³) x² dx
  = (1/(16θ²)) ∫_0^1 [log((1 + u^{1/λ})/(1 − u^{1/λ}))]² (u^{−2/λ} + u^{2/λ} − 2) du
  = (λ/(8θ²))[h7(λ) + h8(λ)],   (14)

where

h7(λ) = Σ_{j=1}^∞ [(1 + (−1)^{j+1})/(j + 1)] [1/(j − 1 + λ) + 1/(j + 3 + λ) − 2/(j + 1 + λ)] Σ_{i=1}^j 1/i,
h8(λ) = Σ_{j=1}^∞ (1/j) [1/(2j − 2 + λ) + 1/(2j + 2 + λ) − 2/(2j + λ)] Σ_{i=1}^{2j−1} (−1)^{i+1}/i.

Finally, from (2),

E[e^{−θX} X / (1 − e^{−2θX})] = θλ ∫_0^∞ [(1 − e^{−θx})/(1 + e^{−θx})]^λ (2e^{−2θx}/(1 − e^{−2θx})²) x dx
  = (1/(4θ)) ∫_0^1 log((1 + u^{1/λ})/(1 − u^{1/λ})) (u^{−1/λ} − u^{1/λ}) du
  = (λ/(2θ)) h9(λ),   (15)

where

h9(λ) = Σ_{j=1}^∞ [1/(2j − 1)] [1/(2j − 2 + λ) − 1/(2j + λ)].

With (13), (14), and (15), the Fisher information matrix for (θ, λ) can be obtained as

I(θ, λ) = n [ Q1(λ)/θ²  Q2(λ)/θ ; Q2(λ)/θ  1/λ² ],   (16)

where Q1(λ) = 1 + λ²[h5(λ) + h6(λ)]/2 − λ[h7(λ) + h8(λ)]/2 and Q2(λ) = −λh9(λ). Here the asymptotic variance-covariance matrix of the MLEs can be obtained by inverting the Fisher information matrix (16). Then, by the asymptotic normality of the MLE, the approximate 100(1 − α)% CIs based on the MLEs θ̂ and λ̂ are

θ̂ ± z_{α/2} √Var(θ̂)  and  λ̂ ± z_{α/2} √Var(λ̂),

where z_{α/2} denotes the upper α/2 point of the standard normal distribution, and Var(θ̂) and Var(λ̂) are the diagonal elements of the asymptotic variance-covariance matrix of the MLEs. In the same way, approximate 100(1 − α)% CIs based on the MMEs can be constructed for a comparison with those based on the MLEs.

5. Application

It is known that a two-parameter gamma distribution can be well fitted to a number of rainfall regimes because of the flexibility of its shape parameter [9, 10], and it is therefore frequently used to analyze rainfall. The present paper employs the real data from Hinkley [11], which represent 30 successive values of precipitation (in inches) in March for the Minneapolis/St. Paul area over a 30-year period (see Table 1). Table 2 and Figure 3 report descriptive statistics and a histogram for the data, respectively. It is observed that the distribution of the data is skewed to the right. For the assessment of the goodness of fit of the EHLD, MLEs for the unknown parameters are calculated first, followed by the application of three well-known tests, namely the Anderson-Darling (A²), Cramer-von Mises (W²), and Kolmogorov-Smirnov (D) tests. The values of the test statistics are given in Table 3, and the test statistics themselves are defined in Appendix A. As shown in Table 3, the p-values are very close to 1. To further verify the fit of the EHLD, the probability-probability (P-P) plot for the fitted EHLD is presented in Figure 4. The value of the correlation coefficient in the P-P plot is 0.99788. These results indicate that the EHLD fits the data very well.

0.77  1.74  0.81  1.20  1.95  1.20  0.47  1.43  3.37  2.20
3.00  3.09  1.51  2.10  0.52  1.62  1.31  0.32  0.59  0.81
2.81  1.87  1.18  1.35  4.75  2.48  0.96  1.89  0.90  2.05

Table 1: 30 successive values for precipitation (in inches) in March for Minneapolis/St. Paul.

Minimum  Maximum  Mean  Standard deviation  Skewness  Kurtosis
0.32     4.75     1.68  1.00                1.15      1.67

Table 2: Descriptive statistics for real data

Figure 3: Histogram for real data

            A²       W²       D
Statistics  0.11368  0.01508  0.05676
p-Values    0.99966  0.99990  0.99983

Table 3: Values of the Anderson-Darling, Cramer-von Mises, and Kolmogorov-Smirnov tests
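The three statistics, as defined in Appendix A, can be computed directly once the hypothesized cdf F0 is fully specified (here, the fitted EHLD cdf would be plugged in). A sketch with an illustrative function name:

```python
import math

def gof_statistics(x, cdf):
    """Anderson-Darling A^2, Cramer-von Mises W^2 and Kolmogorov-Smirnov D
    for a fully specified cdf F0, following the Appendix A definitions."""
    n = len(x)
    z = sorted(cdf(v) for v in x)  # z[i-1] = F0(X_(i)), i = 1..n
    a2 = -n - sum((2 * i - 1) * (math.log(z[i - 1]) + math.log(1 - z[n - i]))
                  for i in range(1, n + 1)) / n
    w2 = sum((z[i - 1] - (2 * i - 1) / (2 * n)) ** 2 for i in range(1, n + 1)) + 1 / (12 * n)
    d_plus = max(i / n - z[i - 1] for i in range(1, n + 1))
    d_minus = max(z[i - 1] - (i - 1) / n for i in range(1, n + 1))
    return a2, w2, max(d_plus, d_minus)
```

Note that the p-values reported in Table 3 additionally require the null distributions of these statistics (which this sketch does not compute); when parameters are estimated from the same data, those null distributions are only approximate.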


Table 4 shows the MLEs and MMEs of the unknown parameters and the corresponding estimates of entropy. Finally, approximate 95% CIs are obtained based on the MLEs and MMEs by using the formulas in Section 4, and these values are given in Table 5. As shown in Tables 4 and 5, the estimate of entropy based on the MMEs is larger than that based on the MLEs. Therefore, it can be seen that the uncertainty about the data is greater when the method of moments estimation is used. The approximate 95% CIs based on the MMEs are shorter than those based on the MLEs.

θ̂        θ̃        λ̂        λ̃        Ĥ(f)     H̃(f)
1.28559  1.25415  2.35058  2.22614  1.26909  1.28766

Table 4: MLEs and MMEs and corresponding estimates of entropy

         θ̂                   λ̂                   θ̃                   λ̃
CIs      (0.87748, 1.69371)  (1.06428, 3.63688)  (0.85272, 1.65559)  (1.02467, 3.42761)
Lengths  0.81623             2.57260             0.80287             2.40294

Table 5: Approximate 95% CIs based on MLEs and MMEs.

6. Concluding Remarks

This paper discusses the statistical properties of the EHLD with two parameters (its scale and shape). The EHLD is quite similar to two-parameter families of distributions such as the GD and the EED. In addition, the paper considers EHLD-related distributions and develops moment and maximum likelihood estimation methods for the unknown parameters in the EHLD. An exact expression of the Fisher information for the unknown parameters, as well as an entropy estimator for the EHLD, is also derived. The EHLD has a major advantage over the GD because its cdf has a closed form. Therefore, the reliability and failure rate functions of the EHLD can be easily calculated. These properties indicate that the EHLD can be an alternative to the GD or the EED.

Figure 4: P-P plot for real data (empirical cdf against the fitted exponentiated half logistic cdf)

Acknowledgments

The authors are very grateful to the editors and the reviewers for their helpful comments.

Appendix A.

Let F0(x) be the cdf assumed under H0. Then the Anderson-Darling statistic A² is

A² = −n − (1/n) Σ_{i=1}^n (2i − 1)[log F0(X(i)) + log(1 − F0(X(n+1−i)))],

the Cramer-von Mises statistic W² is

W² = Σ_{i=1}^n [F0(X(i)) − (2i − 1)/(2n)]² + 1/(12n),

and the Kolmogorov-Smirnov statistic D is

D = max(D⁺, D⁻),

where X(i) is the i-th order statistic and

D⁺ = max_{1≤i≤n} [i/n − F0(X(i))],  D⁻ = max_{1≤i≤n} [F0(X(i)) − (i − 1)/n].

References

[1] R.C. Gupta, R.D. Gupta, P.L. Gupta, Modelling failure time data by Lehman alternatives, Commun. Stat. Theory M. 27 (1998) 887-904.
[2] R.D. Gupta, D. Kundu, Exponentiated exponential family: an alternative to gamma and Weibull distributions, Biom. J. 43 (2001) 117-130.
[3] G.S. Mudholkar, D.K. Srivastava, Exponentiated Weibull family for analyzing bathtub failure-rate data, IEEE Trans. Reliab. 42 (1993) 299-302.
[4] S. Nadarajah, The exponentiated Gumbel distribution with climate application, Environmetrics 17 (2006) 13-23.
[5] S.B. Kang, J.I. Seo, Estimation in an exponentiated half logistic distribution under progressively Type-II censoring, Commun. Korean Stat. Soc. 18 (2011) 657-666.
[6] R.D. Gupta, D. Kundu, Generalized exponential distributions, Austral. & New Zealand J. Stat. 41 (1999) 173-188.
[7] G. Casella, R.L. Berger, Statistical Inference, Duxbury Press, 2002.
[8] T.M. Cover, J.A. Thomas, Elements of Information Theory, Wiley, Hoboken, NJ, USA, 2005.
[9] D.S. Wilks, Conditioning stochastic daily precipitation models on total monthly precipitation, Water Resour. Res. 25 (1989) 1429-1439.
[10] G. Husak, J. Michaelsen, C. Funk, Use of the gamma distribution to represent monthly rainfall in Africa for drought monitoring applications, Int. J. Climatol. 27 (2007) 935-944.
[11] D. Hinkley, On quick choice of power transformations, Appl. Stat. 26 (1977) 67-69.
