Some new results for the transmuted generalized gamma distribution

Journal of Computational and Applied Mathematics 352 (2019) 165–180

Abdus Saboor (a,∗), Muhammad Nauman Khan (a), Gauss M. Cordeiro (b), Marcelino A.R. Pascoa (c), Pedro L. Ramos (d), Mustafa Kamal (a)

(a) Department of Mathematics, Kohat University of Science & Technology, Kohat 26000, Pakistan
(b) Departamento de Estatística, Universidade Federal de Pernambuco, 50740-540 Recife, PE, Brazil
(c) Departamento de Estatística, Universidade Federal de Mato Grosso, 78075-850 Cuiabá, MT, Brazil
(d) Institute of Mathematical Science and Computing, University of São Paulo, São Carlos, Brazil
∗ Corresponding author. E-mail addresses: [email protected] (A. Saboor), [email protected] (M.N. Khan), [email protected] (G.M. Cordeiro), [email protected] (M.A.R. Pascoa), [email protected] (P.L. Ramos), [email protected] (M. Kamal).

Article history: Received 17 June 2017; received in revised form 24 October 2018.
MSC: 60E05; 62E15; 62F10; 33C15; 33C60.
Keywords: Bayesian estimation; lifetime data; generalized gamma distribution; simulation study; transmuted family.

Abstract: The last decade has produced many new classes of distributions that have become valuable tools for applied statisticians, since extending known distributions by adding parameters yields more flexible models. We provide some new mathematical properties of the transmuted generalized gamma distribution defined from the family pioneered by Aryal and Tsokos (2011). We derive explicit expressions for several of its mathematical quantities, and we present maximum likelihood and Bayesian estimators for the model parameters. The different estimators are compared using extensive numerical simulations. For the sake of illustration, we apply the proposed methodology to two real data sets, showing empirically that this distribution is a simple alternative for lifetime data.

1. Introduction

The Weibull distribution has been used in many different fields and applications; see, for example, [1]. The hazard rate function (hrf) of this distribution can only be increasing, decreasing or constant, so it cannot be used to model lifetime data with a bathtub-shaped hazard rate, such as human mortality and machine life cycles. To overcome this shortcoming, several generalizations and modified forms of the Weibull distribution have been discussed by different authors in recent years, among them the generalized modified Weibull [2], beta Sarhan–Zaindin modified Weibull [3], modified Weibull [4], modified beta Weibull [5], McDonald Weibull [6] and Kumaraswamy Weibull [7] distributions. Extensions of the Weibull distribution are usually proposed for one or more of the following reasons: to provide a physical or statistical theoretical argument that explains the mechanism generating the data, to adapt a model that has previously been used successfully, or to obtain a model whose empirical fit to real data is good.


Provost et al. [8] defined the gamma–Weibull (‘‘GW’’ for short) distribution by the cumulative distribution function (cdf) (for x > 0, λ > 0, k > 0, k + ξ > 0)

G(x) = 1 − Γ(ξ/k + 1, x^k λ^{−k}) / Γ(ξ/k + 1)
     = [(x^k λ^{−k})^{1+ξ/k} / (Γ(1 + ξ/k) (1 + ξ/k))] ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k}),   (1)

where Γ(a, x) = ∫_x^∞ y^{a−1} e^{−y} dy is the complementary incomplete gamma function,

₁F₁(a; b; z) = Σ_{k=0}^∞ (a)_k z^k / [(b)_k k!],

(θ)_k = Γ(θ + k)/Γ(θ) denotes the Pochhammer symbol and ℜ(b) > ℜ(a), ℜ(a), ℜ(b) ≠ 0, −1, −2, . . . , |z| < 1 (see, for instance, [9]). The probability density function (pdf) of the GW distribution is

g(x) = k λ^{−k−ξ} x^{k+ξ−1} e^{−x^k λ^{−k}} / Γ(ξ/k + 1).   (2)

Adding new shape parameters to expand a model into a larger family of distributions plays a fundamental role in distribution theory. More recently, there has been increased interest in defining new generators for univariate continuous families of distributions by introducing one or more additional shape parameters to a baseline distribution. This induction of parameters has proven useful in exploring tail properties and also in improving the goodness-of-fit of the generated family. Aryal and Tsokos [10] pursued this line of research by means of an interesting method for adding a new parameter to an existing distribution; the new distribution provides more flexibility to model various types of data. If the baseline distribution has cdf G(x) and pdf g(x), the transmuted extended distribution is defined by the cdf and pdf (for |α| ≤ 1)

F(x) = (α + 1) G(x) − α G(x)²,   (3)

and

f(x) = (α + 1) g(x) − 2 α G(x) g(x),   (4)

respectively. Note that α = 0 in Eq. (4) yields the baseline distribution. Further details can be found in [11]. More recently, some distributions have been proposed in the class (4). For example, Aryal and Tsokos [10] defined the transmuted Weibull for modelling the tensile fatigue characteristics of a polyester/viscose yarn. Here, we generalize their model by applying the transmuted technique [10] to Eq. (1), assuming both k > 0 and ξ > 0, which defines the so-called transmuted generalized gamma (TGG) distribution. Then, the cdf, survival function, pdf and hrf of the TGG distribution are obtained from Eqs. (3) and (4) (for x > 0, ξ > 0, k > 0, |α| ≤ 1) as

F(x) = (α + 1) [1 − Γ(ξ/k + 1, x^k λ^{−k}) / Γ(ξ/k + 1)] − α [1 − Γ(ξ/k + 1, x^k λ^{−k}) / Γ(ξ/k + 1)]²,   (5)

S(x) = 1 − (α + 1) [1 − Γ(ξ/k + 1, x^k λ^{−k}) / Γ(ξ/k + 1)] + α [1 − Γ(ξ/k + 1, x^k λ^{−k}) / Γ(ξ/k + 1)]²,   (6)

f(x) = k λ^{−k−ξ} x^{k+ξ−1} e^{−x^k λ^{−k}} [2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] / Γ((k + ξ)/k)²
     = k(α + 1) e^{−x^k λ^{−k}} x^{k+ξ−1} λ^{−k−ξ} / Γ(ξ/k + 1)
       − [2kα e^{−x^k λ^{−k}} x^{2k+2ξ−1} λ^{−2k−2ξ} / (Γ(ξ/k + 1) Γ(ξ/k + 2))] ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k})   (7)

and

h(x) = k λ^{−k−ξ} x^{k+ξ−1} e^{−x^k λ^{−k}} [2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)]
       / {Γ((k + ξ)/k, x^k λ^{−k}) [α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)]},   (8)

respectively. Hereafter, the four-parameter random variable X having pdf (7) will be denoted by X ∼ TGGα (λ, ξ , k). Lucena et al. [12] defined this distribution, but they did not obtain mathematical properties using special functions. We emphasize that the


Fig. 1. Plots of the TGG pdf for some parameter values.

Fig. 2. Plots of the TGG hrf for some parameter values.

properties derived here for X involve special functions that can easily be computed with the analytical facilities available in standard software. Further, some plots of the TGG pdf (7) are displayed in Fig. 1, and Fig. 2 illustrates some possible shapes of the hrf (8); note that the hrf can have upside-down bathtub and bathtub shapes. The TGG pdf is important since it includes several well-known distributions as sub-models, obtained by choosing the parameter values in (7). In fact, TGG0(λ, ξ, k) is the Provost-type gamma–Weibull [8]; TGG0(λ, 0, k) is the two-parameter Weibull; TGG0(φ√2, 1, 2) stands for the Maxwell; TGG0(σ^{−1}√π, −1, 2) (σ > 0) refers to the half-normal; TGG0(a√2, 0, 2) gives the Rayleigh; TGG0(θ, κ − 1, 1) (θ > 0, κ > 0) becomes the classical gamma; the generalized gamma (GG) distribution [13] arises as TGG0(λ, ξ, k) with ξ > 0 and k > 0; TGG0(2, p/2 − 1, 1), p ∈ N, leads to the χ² distribution with p degrees of freedom; the exponential turns out to be TGG0(λ, 0, 1), whereas TGG0(λ, ξ, 1), ξ ∈ N ∪ {−1, 0}, is the Erlang distribution, for instance.
The rest of the paper is organized as follows. Some useful expressions for certain mathematical quantities of X are provided in Section 2. The estimation of the model parameters by maximum likelihood and by a Bayesian procedure is addressed in Section 3. A simulation study is carried out in Section 4 to identify the most efficient estimators. We prove empirically in Section 5 the usefulness of the TGG distribution by comparing it with some other related distributions using two real data sets from the engineering and biological sciences.
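Although Appendix B gives R code only for the cdf, the pdf (7) and hrf (8) behind Figs. 1 and 2 can be sketched directly from the GW baseline available in the VGAM package used there. The following is a minimal illustrative sketch, not part of the paper's appendix; the function names dTGG and hTGG are our own, and the parameter values in the usage comment are taken from scenario S2 of Section 4.

```r
## Illustrative sketch (not from Appendix B): TGG pdf (7) and hrf (8) via the GW baseline.
require(VGAM)

dTGG <- function(y, lambda, epsilon, k1, alpha) {
  g <- dgengamma.stacy(y, scale = lambda, d = k1, k = (epsilon / k1) + 1)  # GW pdf (2)
  G <- pgengamma.stacy(y, scale = lambda, d = k1, k = (epsilon / k1) + 1)  # GW cdf (1)
  (alpha + 1) * g - 2 * alpha * G * g                                      # transmuted pdf, Eq. (4)
}

hTGG <- function(y, lambda, epsilon, k1, alpha) {
  G <- pgengamma.stacy(y, scale = lambda, d = k1, k = (epsilon / k1) + 1)
  S <- 1 - (alpha + 1) * G + alpha * G^2                                   # survival function (6)
  dTGG(y, lambda, epsilon, k1, alpha) / S                                  # hazard rate, Eq. (8)
}

## e.g. shapes of the kind shown in Figs. 1 and 2 (scenario S2 values):
## curve(dTGG(x, 1.5, 2, 1.5, -0.6), 0, 6); curve(hTGG(x, 1.5, 2, 1.5, -0.6), 0, 6)
```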


2. Mathematical properties

2.1. Moments

In this section, we derive explicit expressions for the positive, negative, central and factorial moments of X. The rth ordinary real moment of X, say µ′_r = E(X^r), is

µ′_r = [k(α + 1) λ^{−k−ξ} / Γ(ξ/k + 1)] ∫_0^∞ x^{r+k+ξ−1} e^{−x^k λ^{−k}} dx
       − [2kα λ^{−2k−2ξ} / (Γ(ξ/k + 1) Γ(ξ/k + 2))] ∫_0^∞ x^{r+2k+2ξ−1} e^{−x^k λ^{−k}} ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k}) dx
     = [(α + 1) λ^r / Γ(ξ/k + 1)] Γ((k + r + ξ)/k)
       − [2kα λ^{−2k−2ξ} / (Γ(ξ/k + 1) Γ(ξ/k + 2))] ∫_0^∞ x^{r+2k+2ξ−1} e^{−x^k λ^{−k}} ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k}) dx,   (9)

where k > 0, ξ > 0. We consider the integral

J = ∫_0^∞ x^{r+2k+2ξ−1} e^{−x^k λ^{−k}} ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k}) dx.   (10)

We have the following Mellin–Barnes integral representation for the confluent hypergeometric function ₁F₁. Further, ₁F₁ is a special case of Meijer's G function [14], denoted by G^{m,n}_{p,q}(·|·); see, for example, Theorem 36 of [9]. For the definitions of the Mellin–Barnes integral and Meijer's G function, see Appendix A:

₁F₁(a; b; −τ t) = [Γ(b) / Γ(a)] (1 / 2πi) ∫_{c−i∞}^{c+i∞} (τ t)^s Γ(−s) Γ(a + s) / Γ(b + s) ds.   (11)

We can write J in Eq. (10) as

J = ∫_0^∞ x^{r+2k+2ξ−1} e^{−x^k λ^{−k}} ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k}) dx
  = (1/k) λ^{2k+r+2ξ} Γ((2k + r + 2ξ)/k) ₂F₁(ξ/k + 1, r/k + 2ξ/k + 2; ξ/k + 2; −1),   (12)

where |x^k λ^{−k}| < 1, ℜ(r + k(2 + s) + 2ξ) > 0, ℜ(2 + ξ/k) > ℜ(1 + ξ/k), and ℜ(1 + ξ/k), ℜ(2 + ξ/k), ℜ(r/k + 2ξ/k + 2) ≠ 0, −1, −2, . . . . Substituting (12) in Eq. (9) gives

µ′_r = [(α + 1) λ^r / Γ(ξ/k + 1)] Γ((k + r + ξ)/k)
       − [2α λ^r / (Γ(ξ/k + 1) Γ(ξ/k + 2))] Γ((2k + r + 2ξ)/k) ₂F₁(ξ/k + 1, r/k + 2ξ/k + 2; ξ/k + 2; −1).   (13)



Consequently, using (13), if |α| ≤ 1, we have E(X^r)|_{r=0} = 1, so the TGG distribution is well-defined. The hth negative real moment can be determined by replacing r with −h in Eq. (13):

E(X^{−h}) = [(α + 1) λ^{−h} / Γ(ξ/k + 1)] Γ((k − h + ξ)/k)
            − [2α λ^{−h} / (Γ(ξ/k + 1) Γ(ξ/k + 2))] Γ((2k − h + 2ξ)/k) ₂F₁(ξ/k + 1, −h/k + 2ξ/k + 2; ξ/k + 2; −1).   (14)

Further, the central moments (µ_n) and cumulants (κ_n) of X are easily obtained from (13) as

µ_n = Σ_{k=0}^{n} (−1)^k \binom{n}{k} µ′_1^k µ′_{n−k}   and   κ_n = µ′_n − Σ_{k=1}^{n−1} \binom{n−1}{k−1} κ_k µ′_{n−k},

respectively, where κ_1 = µ′_1. Thus, κ_2 = µ′_2 − µ′_1², κ_3 = µ′_3 − 3µ′_2 µ′_1 + 2µ′_1³, etc. Clearly, the skewness and kurtosis measures can be computed from the ordinary moments using well-known relationships.


The nth descending factorial moment of X (for n = 1, 2, . . .), say µ′_{(n)}, is

µ′_{(n)} = E[X^{(n)}] = E[X(X − 1) × · · · × (X − n + 1)] = Σ_{j=0}^{n} s(n, j) µ′_j,

where s(n, j) = (j!)^{−1} [d^j x^{(n)} / dx^j]_{x=0} is the Stirling number of the first kind. The moments in Eqs. (13) and (14) can be evaluated numerically by standard statistical software.
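For instance, the ordinary moments can be cross-checked by direct numerical integration of x^r f(x). The short sketch below is our own illustration (it assumes the dTGG function defined after Section 1) and is not part of the paper's appendix.

```r
## Numerical check of E(X^r) by integrating x^r f(x); dTGG is the pdf sketch given earlier.
moment_TGG <- function(r, lambda, epsilon, k1, alpha)
  integrate(function(x) x^r * dTGG(x, lambda, epsilon, k1, alpha),
            lower = 0, upper = Inf)$value

## r = 0 should return 1 (the TGG density integrates to one); scenario S2 values are used.
moment_TGG(0, 1.5, 2, 1.5, -0.6)
m1 <- moment_TGG(1, 1.5, 2, 1.5, -0.6)
m2 <- moment_TGG(2, 1.5, 2, 1.5, -0.6)
c(mean = m1, variance = m2 - m1^2)
```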

2.2. Generating function

The moment generating function (mgf) M(t) of X provides the basis of an alternative route to analytical results compared with working directly with the pdf and cdf of X. First, k is assumed to be a rational number such that k = p/q, where p and q ≠ 0 are co-prime integers, and ℜ(η) > 0, ℜ(θ) > 0, ℜ(s) < 0. We use a result by [8]:

∫_0^∞ x^{η−1} e^{s x − θ x^k} dx = [(2π)^{1−(q+p)/2} q^{1/2} p^{η−1/2} / (−s)^η]
    × G^{q,p}_{p,q}( (p/(−s))^p (θ/q)^q | 1 − (i + η)/p, i = 0, 1, . . . , p − 1; j/q, j = 0, 1, . . . , q − 1 ).   (15)

Further, the mgf of X can be expressed as

M(t) = [k(α + 1) λ^{−k−ξ} / Γ(ξ/k + 1)] ∫_0^∞ x^{k+ξ−1} e^{t x} e^{−x^k λ^{−k}} dx
       − [2kα λ^{−2k−2ξ} / (Γ(ξ/k + 1) Γ(ξ/k + 2))] ∫_0^∞ x^{2k+2ξ−1} e^{t x} e^{−x^k λ^{−k}} ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k}) dx
     = [k(α + 1) λ^{−k−ξ} / Γ(ξ/k + 1)] I_A − [2kα λ^{−2k−2ξ} / (Γ(ξ/k + 1) Γ(ξ/k + 2))] I_B,   (16)

where the quantities I_A and I_B are defined below. By replacing η with k + ξ, θ with λ^{−k}, s with t and k = p/q in Eq. (15), we obtain

I_A = [(2π)^{1−(q+p)/2} q^{1/2} p^{η−1/2} / (−t)^{p/q+ξ}]
      × G^{q,p}_{p,q}( (p/(−t))^p (θ/q)^q | 1 − (i + p/q + ξ)/p, i = 0, 1, . . . , p − 1; j/q, j = 0, 1, . . . , q − 1 ).   (17)

From a result given in [15, p. 187, Eq. 13], we have

₁F₁(a; b; p z) ₁F₁(c; d; q z) = Σ_{j=0}^∞ [(a)_j (p z)^j / (j! (b)_j)] ₃F₂(−c, −b − j + 1, −j; d, −a − j + 1; −q/p).   (18)

Setting a = b, c = 1 + ξ/k, d = 2 + ξ/k, p = q = −λ^{−k} and z = x^k in (18), we obtain

e^{−x^k λ^{−k}} ₁F₁(1 + ξ/k; 2 + ξ/k; −x^k λ^{−k}) = Σ_{j=0}^∞ [(−x^k λ^{−k})^j / j!] ₂F₁(−j, −ξ/k − 1; ξ/k + 2; −1).   (19)

By using (19), the quantity I_B takes the form

I_B = Σ_{j=0}^∞ [(−λ^{−k})^j / j!] ₂F₁(−j, −ξ/k − 1; ξ/k + 2; −1) ∫_0^∞ x^{2k+2ξ+kj−1} e^{t x} dx
    = Σ_{j=0}^∞ [(−λ^{−k})^j / j!] (−t)^{−(j+2)k−2ξ} Γ((j + 2)k + 2ξ) ₂F₁(−j, −ξ/k − 1; ξ/k + 2; −1),   t < 0.   (20)
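Independently of the Meijer G and series representations, M(t) for t < 0 can also be checked by direct numerical integration of e^{tx} f(x). The sketch below is our own illustration (it assumes the dTGG function defined earlier) and is not part of the paper's appendix.

```r
## Direct numerical evaluation of the mgf M(t) = E[exp(tX)] for t < 0.
mgf_TGG <- function(t, lambda, epsilon, k1, alpha)
  integrate(function(x) exp(t * x) * dTGG(x, lambda, epsilon, k1, alpha),
            lower = 0, upper = Inf)$value

mgf_TGG(-0.5, 1.5, 2, 1.5, -0.6)
```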

Finally, the mgf of X can be obtained from Eqs. (16), (17) and (20).

2.3. Shannon entropy

Entropy is a concept encountered in physics and engineering. The Shannon entropy for the continuous case is defined as

H(f) = − ∫_0^∞ f(x) log[f(x)] dx.   (21)


Combining (7) and (21), we have after some algebra

H(f) = − ∫_0^∞ [k λ^{−k−ξ} / Γ((k + ξ)/k)²] x^{k+ξ−1} e^{−x^k λ^{−k}} [2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)]
         × { log[k λ^{−k−ξ} / Γ((k + ξ)/k)²] + (ξ + k − 1) log(x) − x^k λ^{−k} + log[2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] } dx
     = − log[k λ^{−k−ξ} / Γ((k + ξ)/k)²] + [k λ^{−2k−ξ} / Γ((k + ξ)/k)²] I_a
       − [k λ^{−k−ξ} (ξ + k − 1) / Γ((k + ξ)/k)²] I_b − [k λ^{−k−ξ} / Γ((k + ξ)/k)²] I_c,

where

I_a = ∫_0^∞ e^{−x^k λ^{−k}} x^{2k+ξ−1} [2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] dx,

I_b = ∫_0^∞ e^{−x^k λ^{−k}} x^{k+ξ−1} [2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] log(x) dx

and

I_c = ∫_0^∞ e^{−x^k λ^{−k}} x^{k+ξ−1} [2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] log[2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] dx.

Since Γ(m, y) = Γ(m) − (y^m / m) ₁F₁(m; m + 1; −y), we can write

2α Γ((k + ξ)/k, x^k λ^{−k}) + (1 − α) Γ((k + ξ)/k) = (1 + α) Γ((k + ξ)/k) − [2α λ^{−k−ξ} / (1 + ξ/k)] x^{ξ+k} ₁F₁(ξ/k + 1; ξ/k + 2; −x^k λ^{−k}),

and then

I_a = ∫_0^∞ e^{−x^k λ^{−k}} x^{2k+ξ−1} { (1 + α) Γ((k + ξ)/k) − [2α λ^{−k−ξ} / (1 + ξ/k)] x^{ξ+k} ₁F₁(ξ/k + 1; ξ/k + 2; −x^k λ^{−k}) } dx
    = (1 + α) Γ((k + ξ)/k) ∫_0^∞ x^{2k+ξ−1} e^{−x^k λ^{−k}} dx − [2α λ^{−k−ξ} / (1 + ξ/k)] ∫_0^∞ x^{3k+2ξ−1} e^{−x^k λ^{−k}} ₁F₁(ξ/k + 1; ξ/k + 2; −x^k λ^{−k}) dx
    = [(1 + α) λ^{2k+ξ} / k] Γ((k + ξ)/k) Γ(ξ/k + 2) − [2α λ^{2k+ξ} / ((1 + ξ/k) k)] Γ(2ξ/k + 3) ₂F₁(ξ/k + 1, 2ξ/k + 3; ξ/k + 2; −1).
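The entropy can also be obtained by direct numerical integration of (21), which gives a simple check on the decomposition above. The sketch below is our own illustration (it assumes the dTGG function defined earlier) and is not part of the paper's appendix.

```r
## Direct numerical evaluation of the Shannon entropy H(f) in (21).
shannon_TGG <- function(lambda, epsilon, k1, alpha) {
  integrand <- function(x) {
    fx <- dTGG(x, lambda, epsilon, k1, alpha)
    ifelse(fx > 0, -fx * log(fx), 0)
  }
  integrate(integrand, lower = 0, upper = Inf)$value
}

shannon_TGG(1.5, 2, 1.5, -0.6)
```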

The above quantities I_b and I_c can be evaluated by numerical integration (see Davis and Rabinowitz [16] for a detailed discussion).

2.4. Quantile function

Let x = QGW(u) be the quantile function (qf) of the GW distribution given by (2), assuming k > 0 and ξ > 0. First, the inverse of the complementary gamma function Γ(a, x), say x = Γ^{−1}(a, u), admits a power series expansion given by

Γ^{−1}(a, u) = Σ_{i=0}^∞ s_i u^{i/a},   (22)


where s_i = m_i Γ(a + 1)^{i/a} (for i ≥ 0) and m_0 = 0, m_1 = 1, m_2 = 1/(a + 1), m_3 = (3a + 5)/[2(a + 1)²(a + 2)], . . . . Here, any coefficient m_{i+1} (for i ≥ 1) can be determined from the cubic recurrence equation (see the gamma quantile function in [17])

m_{i+1} = [1 / (i(a + i))] { Σ_{r=1}^{i} Σ_{s=1}^{i−r+1} s(i − r − s + 2) m_r m_s m_{i−r−s+2} − Δ(i) Σ_{r=2}^{i} r [r − a − (1 − a)(i + 2 − r)] m_r m_{i−r+2} },

where Δ(i) = 0 if i < 2 and Δ(i) = 1 if i ≥ 2. Second, we can easily obtain from Eq. (22)

QGW(u) = λ [ Σ_{i=0}^∞ s_i u^{i/a} ]^{1/k}.

Let Q(u) be the qf of the TGG distribution obtained by inverting (5). For α ≠ 0, we have

Q(u) = QGW( [1 + α − √((1 + α)² − 4α u)] / (2α) ).

Finally, combining the last two equations, we obtain the qf of X as

Q(u) = λ [ Σ_{i=0}^∞ s_i ( [1 + α − √((1 + α)² − 4α u)] / (2α) )^{i/a} ]^{1/k}.   (23)
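For numerical work, the inner GW quantile need not be computed from the power series (22): the same transmuted argument can be passed to the GW quantile available in VGAM. The sketch below is our own illustration (the function name qTGG is not from the paper); it uses the smaller root of the quadratic, which reduces to u as α → 0.

```r
## Sketch of the TGG quantile function: transmuted argument + GW quantile from VGAM.
qTGG <- function(u, lambda, epsilon, k1, alpha) {
  v <- if (alpha != 0) (1 + alpha - sqrt((1 + alpha)^2 - 4 * alpha * u)) / (2 * alpha) else u
  qgengamma.stacy(v, scale = lambda, d = k1, k = (epsilon / k1) + 1)
}

## Consistency check with the cdf pTGG of Appendix B:
## pTGG(qTGG(0.5, 1.5, 2, 1.5, -0.6), 1.5, 2, 1.5, -0.6)  # should return 0.5
```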

Eq. (23) could be useful to obtain numerical values for mathematical quantities based on the TGG qf, such as some measures of skewness and kurtosis.

2.5. Linear representation

Here, we obtain a linear representation for the density function (7) using the power series for the confluent hypergeometric function given by

₁F₁(s; s + 1; −w) = s Σ_{i=0}^∞ (−1)^i w^i / [(s + i) i!].

We can write after some algebra

f(x) = (1 + α) π_{λ, ξ/k+1, k}(x) − α Σ_{i=0}^∞ w_i π_{λ, ν_i, k}(x),   (24)

where ν_i = 2ξ/k + i + 2,

w_i = 2(−1)^i Γ(2ξ/k + i + 2) / [(ξ/k + i + 1) i! Γ(ξ/k + 1)²]

and

π_{λ, ν, k}(x) = [k / (λ Γ(ν))] (x/λ)^{νk−1} exp[−(x/λ)^k]   (25)

denotes the GW density function with positive parameters λ, ν and k. Eq. (24) reveals that some mathematical quantities of X can be obtained from those of the GW distribution.

2.6. Mean deviations

Generally, there has been great interest in obtaining the first incomplete moment of a distribution. Based on this quantity, we can determine, for example, the mean deviations, which provide important information about the characteristics of a population. The mean deviations of X about the mean µ′_1 = E(X) and about the median M are defined by δ_1 = ∫_0^∞ |x − µ′_1| f(x) dx and δ_2 = ∫_0^∞ |x − M| f(x) dx, respectively. They can also be expressed as δ_1 = 2µ′_1 F(µ′_1) − 2m_1(µ′_1) and δ_2 = µ′_1 − 2m_1(M), where F(µ′_1) is easily calculated from (5) and m_1(z) = ∫_0^z x f(x) dx. From Eq. (24), we can write

m_1(z) = [λ(1 + α) / Γ(ξ/k + 1)] γ([ξ + 1]/k + 1, [z/λ]^k) − α λ Σ_{i=0}^∞ [w_i / Γ(ν_i)] γ(ν_i + 1/k, [z/λ]^k),   (26)

where γ(a, x) = ∫_0^x y^{a−1} e^{−y} dy is the incomplete gamma function.
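A quick way to validate (26) is to truncate the series and compare it with the direct integral ∫_0^z x f(x) dx. The sketch below is our own illustration (the names m1_series and m1_direct are not from the paper; dTGG is the pdf sketch given earlier).

```r
## First incomplete moment m1(z): truncated series (26) versus direct numerical integration.
m1_series <- function(z, lambda, epsilon, k1, alpha, imax = 50) {
  a  <- epsilon / k1                                 # xi/k
  i  <- 0:imax
  nu <- 2 * a + i + 2                                # nu_i in (24)
  w  <- 2 * (-1)^i * gamma(2 * a + i + 2) /
        ((a + i + 1) * factorial(i) * gamma(a + 1)^2)
  y  <- (z / lambda)^k1
  lambda * (1 + alpha) * pgamma(y, a + 1 + 1 / k1) * gamma(a + 1 + 1 / k1) / gamma(a + 1) -
    alpha * lambda * sum(w * pgamma(y, nu + 1 / k1) * gamma(nu + 1 / k1) / gamma(nu))
}

m1_direct <- function(z, lambda, epsilon, k1, alpha)
  integrate(function(x) x * dTGG(x, lambda, epsilon, k1, alpha), 0, z)$value

## e.g. c(series = m1_series(2, 1.5, 2, 1.5, -0.6), direct = m1_direct(2, 1.5, 2, 1.5, -0.6))
```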

Eq. (26) can also be used to obtain the Bonferroni and Lorenz curves, which are useful in many fields such as reliability, demography, economics, insurance and medicine. For the TGG distribution, these curves can be calculated (for given 0 < π < 1) from B(π) = m_1(q)/(π µ′_1) and L(π) = m_1(q)/µ′_1, respectively, where q = F^{−1}(π) is the qf of X at π. In economics, for a given proportion of units whose income is lower than or equal to q, L(π) can be interpreted as the fraction of the total income, while the values of B(π) refer to relative income levels. A further application of (26) refers to the mean residual life function of X defined by

K(x) = [1/S(x)] ∫_x^∞ (y − x) f(y) dy = [1/S(x)] ∫_x^∞ y f(y) dy − x = [1/S(x)] [E(X) − m_1(x)] − x,

where S(x) and f(x) are given in (6) and (7), respectively, and E(X) follows from (13) with r = 1.

3. Estimation

In this section, we discuss two different estimation methods to obtain the estimates of the parameters λ, ξ, k and α of the TGG distribution.

3.1. Maximum likelihood estimation

In order to estimate the parameters of the TGG density function defined in (7), the log-likelihood of the parameter vector θ = (λ, ξ, k, α), say ℓ(θ), is maximized with respect to θ given a set of observations x_1, . . . , x_n. We have

ℓ(θ) = Σ_{i=1}^n log[2α Γ((k + ξ)/k, λ^{−k} x_i^k) + (1 − α) Γ((k + ξ)/k)] − λ^{−k} Σ_{i=1}^n x_i^k
       + (k + ξ − 1) Σ_{i=1}^n log(x_i) + n(−k − ξ) log(λ) − 2n log[Γ((k + ξ)/k)] + n log(k).   (27)

The score components are

∂ℓ(θ)/∂λ = Σ_{i=1}^n 2kα λ^{−k−1} x_i^k e^{−λ^{−k} x_i^k} (λ^{−k} x_i^k)^{(k+ξ)/k − 1} / [2α Γ((k + ξ)/k, λ^{−k} x_i^k) + (1 − α) Γ((k + ξ)/k)]
             + k λ^{−k−1} Σ_{i=1}^n x_i^k + n(−k − ξ)/λ = 0,

∂ℓ(θ)/∂ξ = Σ_{i=1}^n { 2α [ G^{3,0}_{2,3}( λ^{−k} x_i^k | 1, 1; 0, 0, (k + ξ)/k ) + log(λ^{−k} x_i^k) Γ((k + ξ)/k, λ^{−k} x_i^k) ] − (α − 1) Γ((k + ξ)/k) ψ^{(0)}((k + ξ)/k) }
             / { k [2α Γ((k + ξ)/k, λ^{−k} x_i^k) + (1 − α) Γ((k + ξ)/k)] }
             + Σ_{i=1}^n log(x_i) − (2n/k) ψ^{(0)}((k + ξ)/k) − n log(λ) = 0,

∂ℓ(θ)/∂k = Σ_{i=1}^n { 2α [ (1/k − (k + ξ)/k²) ( G^{3,0}_{2,3}( λ^{−k} x_i^k | 1, 1; 0, 0, (k + ξ)/k ) + log(λ^{−k} x_i^k) Γ((k + ξ)/k, λ^{−k} x_i^k) )
             − e^{−λ^{−k} x_i^k} (λ^{−k} x_i^k)^{(k+ξ)/k − 1} ( λ^{−k} x_i^k log(x_i) − λ^{−k} log(λ) x_i^k ) ]
             + (1 − α) (1/k − (k + ξ)/k²) Γ((k + ξ)/k) ψ^{(0)}((k + ξ)/k) } / [2α Γ((k + ξ)/k, λ^{−k} x_i^k) + (1 − α) Γ((k + ξ)/k)]
             + Σ_{i=1}^n log(x_i) − 2n (1/k − (k + ξ)/k²) ψ^{(0)}((k + ξ)/k) + λ^{−k} log(λ) Σ_{i=1}^n x_i^k − λ^{−k} Σ_{i=1}^n x_i^k log(x_i) + n/k − n log(λ) = 0,

∂ℓ(θ)/∂α = Σ_{i=1}^n [2Γ((k + ξ)/k, λ^{−k} x_i^k) − Γ((k + ξ)/k)] / [2α Γ((k + ξ)/k, λ^{−k} x_i^k) + (1 − α) Γ((k + ξ)/k)] = 0,   (28)


where ψ^{(0)}(s) = d log[Γ(s)]/ds denotes the digamma function. The non-linear system of likelihood equations (28) can be solved numerically by iterative methods such as quasi-Newton (BFGS) and Newton–Raphson type algorithms. Alternatively, direct maximization of the log-likelihood ℓ(θ) can be performed with well-established routines such as nlm and optim and the AdequacyModel and maxLik packages in the R statistical software, SAS (PROC NLMIXED), the Ox program (sub-routine MaxBFGS) or the limited-memory quasi-Newton algorithm for bound-constrained optimization (L-BFGS-B).

3.2. Bayesian analysis

In the Bayesian approach, the information about the model parameters is obtained through the posterior marginal distributions. We adopt Markov chain Monte Carlo (MCMC) simulation methods such as the Metropolis–Hastings algorithm (see Gamerman and Lopes [18] for a detailed discussion) and the simulated annealing (SANN) method (see Kirkpatrick et al. [19]). The SANN algorithm was developed as an adaptation of the Metropolis–Hastings algorithm, which plays an important role in Bayesian analysis. Since we have no prior information from historical data or from a previous experiment, we assign conjugate but weakly informative prior distributions to the parameters; with such weakly informative priors, the posterior distribution is a well-defined proper distribution. Further, we assume that the elements of the parameter vector are independent and that the joint prior distribution of all unknown parameters has density function

π(λ, ξ, k, α) ∝ π(λ) × π(ξ) × π(k) × π(α).   (29)

Here, λ ∼ Γ(a_1, b_1), ξ ∼ Γ(a_2, b_2), k ∼ Γ(a_3, b_3) and α ∼ U(a, b), where U(a, b) denotes a uniform distribution with mean (a + b)/2, variance (b − a)²/12 and density function

f(υ; a, b) = 1/(b − a)   for a ≤ υ ≤ b,   −∞ < a < b < ∞,

and Γ(a_i, b_i) denotes a gamma distribution with mean a_i/b_i, variance a_i/b_i² and density function

f(ν; a_i, b_i) = b_i^{a_i} ν^{a_i−1} e^{−ν b_i} / Γ(a_i),   ν > 0, a_i > 0, b_i > 0.

All hyper-parameters are specified. Combining the likelihood function (27) and the prior distribution (29), the joint posterior distribution for the model parameters takes the form

π(λ, ξ, k, α | x) ∝ [k λ^{−k−ξ} / Γ((k + ξ)/k)²]^n ( Π_{i=1}^n x_i^{k+ξ−1} ) exp( −λ^{−k} Σ_{i=1}^n x_i^k )
                    × Π_{i=1}^n [2α Γ((k + ξ)/k, x_i^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] × π(λ, ξ, k, α).

The joint posterior density above is analytically intractable because the integration of the joint posterior density is not easy to perform. In this direction, we first obtain the full conditional distributions of the unknown parameters:

π(λ | x, ξ, k, α) ∝ λ^{−n(k+ξ)} exp( −λ^{−k} Σ_{i=1}^n x_i^k ) Π_{i=1}^n [2α Γ((k + ξ)/k, x_i^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] × π(λ),

π(k | x, λ, ξ, α) ∝ [k λ^{−k} / Γ((k + ξ)/k)²]^n exp( −λ^{−k} Σ_{i=1}^n x_i^k ) ( Π_{i=1}^n x_i^k ) Π_{i=1}^n [2α Γ((k + ξ)/k, x_i^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] × π(k),

π(ξ | x, λ, k, α) ∝ [λ^{−ξ} / Γ((k + ξ)/k)²]^n ( Π_{i=1}^n x_i^ξ ) Π_{i=1}^n [2α Γ((k + ξ)/k, x_i^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] × π(ξ)

and

π(α | x, λ, ξ, k) ∝ Π_{i=1}^n [2α Γ((k + ξ)/k, x_i^k λ^{−k}) + (1 − α) Γ((k + ξ)/k)] × π(α).

Since the full conditional distributions for λ, k, ξ and α do not have explicit expressions, we require the use of the Metropolis–Hastings algorithm.

4. Simulation study

We evaluate the performance of the maximum likelihood method and the Bayesian approach for estimating the TGG parameters using Monte Carlo simulations with one thousand replications. We generate pseudo-random numbers by the

Table 1
Proportion of failures that each method returned with N = 1,000 for different scenarios.

       | Classical inference     | Bayesian inference
n      | S1      S2      S3      | S1      S2      S3
50     | 0.851   0.954   0.822   | 0.000   0.000   0.000
100    | 0.765   0.728   0.735   | 0.000   0.000   0.000
150    | 0.693   0.615   0.720   | 0.000   0.000   0.000
200    | 0.600   0.547   0.713   | 0.000   0.000   0.000
250    | 0.547   0.518   0.720   | 0.000   0.000   0.000
500    | 0.349   0.420   0.713   | 0.000   0.000   0.000
1000   | 0.071   0.328   0.720   | 0.000   0.000   0.000

Table 2 Relative biases and RRMSEs in parentheses under S1. Classical inference

Bayesian inference

n

λ = 0.5

ξ = 3.0

k = 5.0

α = 0.9

λ = 0.5

ξ = 3.0

k = 5.0

α = 0.9

50

−0.0685

0.3853 (0.5811)

0.3510 (0.4258)

−0.7776

−0.1142 (0.1348)

0.1943 (0.2957)

−1.5059

(1.1119)

0.0988 (0.5401)

100

−0.0681

0.4248 (0.5676)

0.3586 (0.4170)

−0.7501

−0.1074

(1.1153)

(0.1276)

0.2980 (0.4663)

0.2473 (0.3268)

−1.3393 (1.6035)

150

−0.0530

0.3961 (0.5301)

0.3833 (0.4282)

−0.5499

−0.0849 (0.1082)

0.3866 (0.5430)

0.3005 (0.3558)

−0.9897

(0.9500)

0.4868 (0.5782)

0.4430 (0.4642)

−0.2208

−0.0378 (0.0542)

0.4380 (0.4610)

−0.3200

(0.4796)

0.4848 (0.5746)

0.4654 (0.5604)

0.4193 (0.4458)

−0.2936

−0.0560 (0.0776)

0.3841 (0.4225)

−0.5863

(0.6204)

0.4371 (0.5506)

0.4919 (0.5399)

0.4788 (0.4903)

−0.1448

−0.0248 (0.0360)

0.4751 (0.4863)

−0.1453

(0.3182)

0.4638 (0.5170)

0.4490 (0.4845)

0.5113 (0.5184)

−0.1081

−0.0134 (0.0179)

0.5202 (0.5240)

−0.0749

(0.2382)

0.3686 (0.3979)

(0.0957) (0.0931) (0.0803) 200

−0.0318

250

−0.0366

500

−0.0259

1000

−0.0198

(0.0467) (0.0561) (0.0342) (0.0268)

(1.7042)

(1.3569) (0.6477) (0.9945) (0.3593) (0.1167)

inverse transformation method combined with the algorithm of [20]. All results are carried out using the R statistical software. We employ the optim subroutine with the SANN method for maximizing the log-likelihood function and for finding the maximum a posteriori (MAP) estimates (see, for instance, [21]). The relative bias (RB) and relative root mean square error (RRMSE) are

RB_i = (1/θ_i) [ (1/N) Σ_{j=1}^N (θ̂_{i,j} − θ_i) ]   and   RRMSE_i = (1/θ_i) [ (1/N) Σ_{j=1}^N (θ̂_{i,j} − θ_i)² ]^{1/2},   (30)

where N = 1,000 (see Muñoz and Rueda [22]). For the case of α, we consider RRMSE(α) = (1/|α|) [ (1/N) Σ_{j=1}^N (α̂_j − α)² ]^{1/2} to avoid

negative RRMSEs, since α can take negative values. We consider seven different sample sizes: n = 50, 100, 150, 200, 250, 500 and 1,000. The selected parameter vectors (λ, ξ, k, α) are: Scenario 1 (S1) (0.5, 3, 5, 0.9), Scenario 2 (S2) (1.5, 2, 1.5, −0.6) and Scenario 3 (S3) (2.5, 1.3, 0.5, −0.8). It is worth mentioning that the results from this simulation study are similar for other parameter values. We use the same samples for both estimation procedures to make a fair comparison. Additionally, we compute the frequency with which the numerical techniques fail to find the estimates, since we require an estimation method that is stable for the parameter estimates. Under the Bayesian approach, the following independent priors are considered for the SANN algorithm: λ ∼ Γ(0.01, 0.01), ξ ∼ Γ(0.01, 0.01), k ∼ Γ(0.01, 0.01) and α ∼ U(−1, 1), so that we have vague prior distributions. The results are given in Tables 1–4.
The values in Tables 2–4 reveal that both methods provide good estimates in terms of the quantities RB and RRMSE. Additionally, it is clear that their values decrease when the sample size increases, as expected. However, the maximum likelihood method fails to find the estimates for a significant proportion of the samples, which is undesirable; in many of these cases the optimization routine returns estimates of α greater than 1 or smaller than −1. On the other hand, for the Bayesian method, the use of priors, especially for α, helps the numerical methods to achieve convergence. In fact, the figures in Table 1 do not report convergence problems for the Bayesian estimators. Hence, this method can be an effective solution to the problem of estimating the parameters of the TGG distribution.

5. Applications

In this section, we analyse two real data sets. The first uncensored data set refers to the breaking stress of carbon fibres (in GPa) as reported in [23]. The second uncensored data set was previously studied by Lee and Wang [24] and represents the


Table 3
Relative biases and RRMSEs in parentheses under S2.

     | Classical inference                                                       | Bayesian inference
n    | λ = 1.5           ξ = 2.0           k = 1.5           α = −0.6            | λ = 1.5            ξ = 2.0           k = 1.5           α = −0.6
50   | 0.1287 (0.4248)   1.8407 (2.0545)   0.4106 (0.5745)   −1.8511 (2.0721)    | 0.3054 (0.7870)    0.4453 (1.5455)   0.6719 (1.0361)   −0.2086 (1.1642)
100  | 0.2332 (0.4314)   1.4325 (1.6455)   0.5165 (0.6369)   −1.4564 (1.8783)    | 0.3134 (0.6758)    0.4888 (1.4035)   0.7086 (1.0124)   −0.2135 (1.1559)
150  | 0.2153 (0.4126)   1.3550 (1.5682)   0.4982 (0.6003)   −1.1935 (1.7373)    | 0.2427 (0.5914)    0.7008 (1.4216)   0.6106 (0.8887)   −0.2531 (1.2177)
200  | 0.1951 (0.4154)   1.3508 (1.5420)   0.4785 (0.5889)   −1.1251 (1.7283)    | 0.1737 (0.5007)    0.8579 (1.3985)   0.5066 (0.7391)   −0.2462 (1.2387)
250  | 0.1635 (0.3883)   1.3036 (1.4748)   0.4483 (0.5551)   −0.9278 (1.6247)    | 0.1192 (0.4427)    0.9340 (1.3631)   0.4384 (0.6357)   −0.1362 (1.1701)
500  | 0.0065 (0.2738)   1.1111 (1.2401)   0.3054 (0.3829)   0.0206 (1.0583)     | −0.0215 (0.2458)   0.9936 (1.1641)   0.2790 (0.3484)   0.3173 (0.8087)
1000 | −0.0046 (0.1718)  0.8139 (0.9195)   0.2814 (0.3120)   0.5236 (0.6275)     | 0.0204 (0.1905)    0.7394 (0.8766)   0.2998 (0.3347)   0.5486 (0.6036)

Table 4 Relative biases and RRMSEs in parentheses under S3. Classical Inference

Bayesian Inference

n

λ = 2.5

ξ = 1.3

k = 0.5

α = −0.8

λ = 2.5

ξ = 1.3

k = 0.5

α = −0.8

50

0.5406 (0.8490)

1.9978 (2.0727)

0.3471 (0.3725)

−1.2192

−0.0283

(1.4391)

(0.4716)

1.6175 (1.7960)

0.2025 (0.2534)

−0.3576 (0.8466)

0.2647 (0.6340)

1.9529 (2.0359)

0.3185 (0.3475)

−0.7925 (1.1793)

−0.2280

1.8468 (1.9453)

0.1792 (0.2316)

−0.2785 (0.7899)

0.0107 (0.4736)

1.8833 (1.9645)

0.2611 (0.2903)

−0.5332 (1.0242)

−0.2968

1.8601 (1.9544)

0.1691 (0.2165)

−0.2484 (0.7905)

−0.0309

1.7317 (1.8099)

0.2518 (0.0369)

−0.3197 (0.6031)

−0.2618

1.7967 (1.8807)

0.1857 (0.2193)

−0.2060 (0.7629)

1.6409 (1.7156)

0.2445 (0.2628)

−0.1828 (0.7461)

−0.2019

(0.3481)

1.6850 (1.7655)

0.2032 (0.2298)

−0.1132 (0.6734)

0.0217 (0.2837)

1.3683 (1.4043)

0.2591 (0.2686)

0.1139 (0.4149)

−0.0623 (0.2550)

1.4187 (1.4564)

0.2411 (0.2504)

0.1090 (0.4158)

0.0813 (0.2242)

1.2363 (1.2478)

0.2677 (0.2717)

0.2114 (0.2363)

0.0736 (0.1741)

1.2537 (1.2681)

0.2686 (0.2713)

0.1999 (0.2619)

100 150 200

(0.3857) 250 500 1000

−0.0598

(0.5204) (0.4921) (0.4857) (0.3890)

remission times (in months) of a random sample of 128 bladder cancer patients. Bladder cancer is a disease in which certain cells in the bladder become abnormal and multiply without control. The bladder is a hollow, muscular organ in the lower abdomen that stores urine until it is ready to be excreted from the body. The most common type of bladder cancer begins in the cells lining the inside of the bladder and is called transitional cell carcinoma. The simulation study in Section 4 indicates that the Bayesian estimators should be adopted for estimating the parameters of the TGG distribution. We therefore compare the results obtained from the fitted TGG distribution using the Bayesian method with those from the fits of the gamma exponentiated exponential (GEE) [25], gamma–Weibull (GW) [8] and transmuted Weibull (TW) [10] distributions. We consider the deviance information criterion (DIC) [26] to compare the results. The densities of these distributions are given in the sequel (for x > 0); a short R sketch of them follows the list.

• The GEE distribution:
  f(x) = [λ α^δ / Γ(δ)] e^{−λx} (1 − e^{−λx})^{α−1} {−log(1 − e^{−λx})}^{δ−1},   λ, α, δ > 0.

• The GW distribution:
  f(x) = k λ^{−k−ξ} x^{ξ+k−1} e^{−λ^{−k} x^k} / Γ(1 + ξ/k),   ξ + k, λ > 0.

• The TW distribution:
  f(x) = (η/σ) (x/σ)^{η−1} e^{−(x/σ)^η} [1 − λ + 2λ e^{−(x/σ)^η}],   η, σ > 0, |λ| ≤ 1.
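For completeness, these three competitor densities can be coded in R in the same style as Appendix B; the sketch below is our own illustration (the function names dGEE, dGW and dTW are not from the paper).

```r
## Competing densities of Section 5, written as R functions.
dGEE <- function(x, lambda, alpha, delta)            # gamma exponentiated exponential
  lambda * alpha^delta / gamma(delta) * exp(-lambda * x) *
  (1 - exp(-lambda * x))^(alpha - 1) * (-log(1 - exp(-lambda * x)))^(delta - 1)

dGW <- function(x, lambda, epsilon, k1)              # gamma-Weibull (Provost et al.)
  k1 * lambda^(-k1 - epsilon) * x^(epsilon + k1 - 1) *
  exp(-lambda^(-k1) * x^k1) / gamma(1 + epsilon / k1)

dTW <- function(x, eta, sigma, lambda)               # transmuted Weibull
  (eta / sigma) * (x / sigma)^(eta - 1) * exp(-(x / sigma)^eta) *
  (1 - lambda + 2 * lambda * exp(-(x / sigma)^eta))
```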

Table 5
Posterior summaries for the parameters from the fitted models to the carbon fibres data.

Distribution       | Parameter | Mean          | SD            | HPD (95%)                       | R̂      | DIC
GEE (λ, α, δ)      | α         | 10.0355       | 0.0099        | (10.0151; 10.0544)              | 0.9999  | 772.8930
                   | δ         | 7.2474        | 0.0098        | (7.2264; 7.2652)                | 1.0004  |
                   | λ         | 0.2655        | 9.98 × 10−5   | (0.2653; 0.2657)                | 1.0008  |
GW (k, ξ, λ)       | λ         | 3.0618        | 0.0099        | (3.0428; 3.0820)                | 1.0024  | 261.3386
                   | ξ         | 7.92 × 10−6   | 6.09 × 10−6   | (3.71 × 10−9; 1.97 × 10−5)      | 0.9999  |
                   | k         | 3.4393        | 0.0095        | (3.4194; 3.4584)                | 1.0027  |
TW (η, σ, λ)       | λ         | 0.9999        | 9.95 × 10−5   | (0.9997; 1.0001)                | 1.0030  | 280.0920
                   | σ         | 3.0669        | 0.0098        | (3.0465; 3.0852)                | 1.0001  |
                   | η         | 3.4411        | 0.0009        | (3.4392; 3.4431)                | 1.0008  |
TGG (λ, ξ, k, α)   | α         | −0.5909       | 0.0030        | (−0.5967; −0.5849)              | 1.0011  | 56.1077
                   | k         | 2.9238        | 0.0101        | (2.9041; 2.9436)                | 0.9999  |
                   | ξ         | 2.00 × 10−6   | 1.42 × 10−6   | (3.59 × 10−10; 4.77 × 10−6)     | 0.9999  |
                   | λ         | 2.7545        | 0.0099        | (2.7351; 2.7740)                | 1.0002  |

Table 6
Posterior summaries for the parameters from the fitted models to the cancer patients data.

Distribution       | Parameter | Mean      | SD      | HPD (95%)             | R̂      | DIC
GEE (λ, α, δ)      | α         | 1.2178    | 0.0099  | (1.1987; 1.2378)      | 0.9998  | 781.8844
                   | δ         | 1.0014    | 0.0009  | (0.9995; 1.0034)      | 1.0014  |
                   | λ         | 0.1212    | 0.0069  | (0.1083; 0.1358)      | 1.0007  |
GW (k, ξ, λ)       | λ         | 0.5952    | 0.0093  | (0.5762; 0.6153)      | 1.0031  | 288.635
                   | ξ         | 1.4290    | 0.0095  | (1.4097; 1.4489)      | 1.0007  |
                   | k         | 0.5201    | 0.0073  | (0.5055; 0.5345)      | 1.0006  |
TW (η, σ, λ)       | λ         | 0.7450    | 0.0094  | (0.7255; 0.7646)      | 1.0014  | 358.4317
                   | σ         | 14.6197   | 0.0091  | (14.6008; 14.6396)    | 1.0005  |
                   | η         | 1.1334    | 0.0010  | (1.1260; 1.1654)      | 1.0003  |
TGG (λ, ξ, k, α)   | α         | 0.7303    | 0.0013  | (0.7285; 0.7324)      | 1.0015  | 53.0680
                   | k         | 0.6314    | 0.0010  | (0.6295; 0.6334)      | 1.0022  |
                   | ξ         | 1.0309    | 0.0039  | (1.0236; 1.038)       | 1.0010  |
                   | λ         | 2.6303    | 0.0101  | (2.6098; 2.6492)      | 0.9998  |

The following independent priors are considered to perform the Metropolis–Hastings algorithm. TGG distribution: λ ∼ Γ(0.01, 0.01), ξ ∼ Γ(0.01, 0.01), k ∼ Γ(0.01, 0.01) and α ∼ U(−1, 1); GEE distribution: λ ∼ Γ(0.01, 0.01), α ∼ Γ(0.01, 0.01) and δ ∼ Γ(0.01, 0.01); GW distribution: k ∼ Γ(0.01, 0.01), ξ ∼ Γ(0.01, 0.01) and λ ∼ Γ(0.01, 0.01); TW distribution: η ∼ Γ(0.01, 0.01), σ ∼ Γ(0.01, 0.01) and λ ∼ U(−1, 1), so that we have vague prior distributions. Considering these prior density functions, we generate two parallel independent Metropolis–Hastings runs of size 100,000 for each parameter, disregarding the first 10,000 iterations to eliminate the effect of the initial values and, to avoid correlation problems, using a spacing of size 5, thus obtaining a sample of size 18,000 from each chain. To monitor the convergence of the Metropolis–Hastings runs, we use the methods suggested by [27]. Further, we use the between- and within-sequence information, following the approach developed in [28], to obtain the potential scale reduction, R̂. In all cases, these values are close to one, indicating convergence of the chains. The approximate posterior marginal density functions for the parameters of the TGG distribution are displayed in Figs. 3 and 4.

Fig. 3. Approximate posterior marginal densities for the parameters from the TGG model for the carbon fibres data.

Fig. 4. Approximate posterior marginal densities for the parameters from the TGG model for the cancer patients data.

In Tables 5 and 6, we report posterior summaries for the parameters of the models. Here, SD represents the standard deviation of the posterior distribution of each parameter and HPD represents the 95% highest posterior density interval. Based on the DIC criterion, the TGG distribution provides the best fit among the competing models.

6. Conclusions

In this paper, we study some mathematical properties of the new four-parameter transmuted generalized gamma (TGG) distribution obtained by applying the transmuted technique to the generalized gamma distribution. We provide computable representations for the positive and negative moments, factorial moments, generating function, Shannon entropy and mean deviations. We derive and compare, via intensive simulation experiments, two methods for estimating the parameters of the proposed distribution. The simulation study reveals that the Bayesian method can be an effective solution to the problem of estimating the parameters of the TGG distribution. Finally, we apply the proposed distribution to two real data sets, showing empirically that it is a useful model for lifetime applications.

Acknowledgements

The authors are grateful to the principal editor and three anonymous reviewers for their valuable suggestions and comments, which have improved the paper substantially. The research of Abdus Saboor has been supported in part by the Higher Education Commission of Pakistan under NRPU project No. 3104. The research of Gauss Cordeiro has been supported by the CNPq agency, Brazil. Pedro Ramos is grateful to the São Paulo State Research Foundation (FAPESP Proc. 2017/25971-0).

Appendix A. Meijer G-function and Mellin–Barnes integral

The notation G^{m,n}_{p,q}(·|·) stands for the Meijer G-function [14], defined in terms of the Mellin–Barnes integral as

G^{m,n}_{p,q}( z | a_1, . . . , a_p; b_1, . . . , b_q ) = (1 / 2πi) ∫_C [ Π_{j=1}^m Γ(b_j − s) Π_{j=1}^n Γ(1 − a_j + s) / ( Π_{j=m+1}^q Γ(1 − b_j + s) Π_{j=n+1}^p Γ(a_j − s) ) ] z^s ds,

where 0 ≤ m ≤ q, 0 ≤ n ≤ p and the poles a_j, b_j are such that no pole of Γ(b_j − s), j = 1, . . . , m, coincides with any pole of Γ(1 − a_j + s), j = 1, . . . , n; i.e. a_k − b_j ∉ N, while z ≠ 0. C is a suitable integration contour which starts at −i∞ and goes to i∞, separating the poles of Γ(b_j − s), j = 1, . . . , m, which lie to the right of the contour, from all poles of Γ(1 − a_j + s), j = 1, . . . , n, which lie to the left of C. The integral converges if δ = m + n − (p + q)/2 > 0 and |arg(z)| < δπ; see [29, p. 143] and [14]. The G-function's Mathematica code reads

MeijerG[{{a1, . . . , an}, {an+1, . . . , ap}}, {{b1, . . . , bm}, {bm+1, . . . , bq}}, z].

Appendix B. R codes

```r
## Package used for the GW (Stacy generalized gamma) baseline
require(VGAM)

## CDF of the TGG
pTGG <- function(y, lambda, epsilon, k1, alpha) {
  cdf <- (alpha + 1) * pgengamma.stacy(y, scale = lambda, d = k1, k = ((epsilon / k1) + 1)) -
    alpha * (pgengamma.stacy(y, scale = lambda, d = k1, k = ((epsilon / k1) + 1))^2)
  return(cdf)
}

## Auxiliary function to generate pseudo-random samples from the TGG
fxp <- function(x, lambda, epsilon, k1, alpha) {
  a <- runif(1, 0, 1)
  pTGG(x, lambda, epsilon, k1, alpha) - a
}

## Function to generate pseudo-random samples from the TGG
rTGG <- function(n, plambda, pepsilon, pk, palpha) {
  t <- c()
  for (i in 1:n)
    t[i] <- uniroot(fxp, c(0, 1000), lambda = plambda, epsilon = pepsilon,
                    k1 = pk, alpha = palpha)$root
  return(t)
}

## Complementary (upper) incomplete gamma function Gamma(b, a)
igamma <- function(a, b) {
  (pgamma(a, b, lower = FALSE) * gamma(b))
}

## Log-likelihood (the data vector t and sample size n must be in the workspace)
loglike <- function(theta) {


  lambda  <- theta[1]
  epsilon <- theta[2]
  k       <- theta[3]
  alfa    <- theta[4]
  l <- sum(log((2 * alfa * igamma((lambda^(-k)) * (t^k), ((k + epsilon) / k))) +
                 ((1 - alfa) * gamma((k + epsilon) / k)))) -
    ((lambda^(-k)) * sum(t^k)) +
    (k + epsilon - 1) * sum(log(t)) + n * (-k - epsilon) * log(lambda) -
    2 * n * lgamma((k + epsilon) / k) + n * log(k)
  return(l)
}

## Log-posterior
loglikemap <- function(theta) {
  hyp     <- 0.01
  lambda  <- theta[1]
  epsilon <- theta[2]
  k       <- theta[3]
  alfa    <- theta[4]
  likeh <- sum(log((2 * alfa * igamma((lambda^(-k)) * (t^k), ((k + epsilon) / k))) +
                     ((1 - alfa) * gamma((k + epsilon) / k)))) -
    ((lambda^(-k)) * sum(t^k)) +
    (k + epsilon - 1) * sum(log(t)) + n * (-k - epsilon) * log(lambda) -
    2 * n * lgamma((k + epsilon) / k) + n * log(k)
  posterior <- likeh + dgamma(lambda, shape = hyp, rate = hyp, log = TRUE) +
    dgamma(epsilon, shape = hyp, rate = hyp, log = TRUE) +
    dgamma(k, shape = hyp, rate = hyp, log = TRUE) +
    dunif(alfa, min = -1, max = 1, log = TRUE)
  return(posterior)
}

## Function to compute the MAP
require(maxLik)
maxSANN(loglikemap, start = c(ini1, ini2, ini3, ini4))$estimate
## ini1, ini2, ini3, ini4 are the initial values that start the iterative algorithm
```

References

[1] D.N.P. Murthy, M. Xie, R. Jiang, Weibull Models, vol. 358, Wiley, New York, 2003.
[2] J.M.F. Carrasco, E.M.M. Ortega, G.M. Cordeiro, A generalized modified Weibull distribution for lifetime modeling, Comput. Statist. Data Anal. 53 (2008) 450–462.
[3] A. Saboor, S.B. Hassan, M.N. Khan, Beta Sarhan–Zaindin modified Weibull distribution, Appl. Math. Model. 40 (2016) 6604–6621.
[4] S.J. Almalki, J. Yuan, A new modified Weibull distribution, Reliab. Eng. Syst. Saf. 111 (2013) 164–170.
[5] M.N. Khan, The modified beta Weibull distribution, Hacet. J. Math. Stat. 44 (2015) 1553–1568.
[6] G.M. Cordeiro, E.M. Hashimoto, E.M.M. Ortega, The McDonald Weibull model, Statistics 48 (2014) 256–278.
[7] G.M. Cordeiro, E.M.M. Ortega, S. Nadarajah, The Kumaraswamy Weibull distribution with application to failure data, J. Franklin Inst. 347 (2010) 1399–1429.
[8] S.B. Provost, A. Saboor, M. Ahmad, The gamma–Weibull distribution, Pak. J. Stat. 27 (2011) 111–131.
[9] E.D. Rainville, Special Functions, The Macmillan Company, New York, 1960.
[10] G.R. Aryal, C.P. Tsokos, Transmuted Weibull distribution: a generalization of the Weibull probability distribution, Eur. J. Pure Appl. Math. 4 (2011) 89–102.
[11] W. Shaw, I. Buckley, The alchemy of probability distributions: beyond Gram–Charlier expansions, and a skew–kurtotic–normal distribution from a rank transmutation map, Research Report, 2007.
[12] S.E.F. Lucena, A.H.A. Silva, G.M. Cordeiro, The transmuted generalized gamma distribution: properties and application, J. Data Sci. 13 (2015) 409–420.
[13] E.W. Stacy, A generalization of the gamma distribution, Ann. Math. Stat. 33 (1962) 1187–1192.
[14] C.S. Meijer, On the G-function I–VIII, Proc. K. Ned. Akad. Wet. 49 (1946) 227–237, 344–356, 457–469, 632–641, 765–772, 936–943, 1063–1072, 1165–1175.
[15] A. Erdelyi, W. Magnus, F. Oberhettinger, F. Tricomi, Higher Transcendental Functions I, McGraw–Hill, 1953.
[16] P.J. Davis, P. Rabinowitz, Methods of Numerical Integration, Courier Corporation, 2007.
[17] W. Shaw, G. Steinbrecher, Quantile mechanics, European J. Appl. Math. 19 (2008) 87–112.
[18] D. Gamerman, H.F. Lopes, Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Chapman and Hall/CRC, 2006.
[19] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[20] D.J. Best, D.E. Roberts, Algorithm AS 91: percentage points of the chi-squared distribution, Appl. Stat. 24 (1975) 385–388.
[21] F. Louzada, P.L. Ramos, Efficient closed-form maximum a posteriori estimators for the gamma distribution, J. Stat. Comput. Simul. 88 (2018) 1134–1146.
[22] J.F. Muñoz, M. Rueda, New imputation methods for missing data using quantiles, J. Comput. Appl. Math. 232 (2009) 305–317.
[23] M.D. Nichols, W.J. Padgett, A bootstrap control chart for Weibull percentiles, Qual. Reliab. Eng. Int. 22 (2006) 141–151.
[24] E. Lee, J. Wang, Statistical Methods for Survival Data Analysis, Wiley & Sons, New York, 2003.
[25] M.M. Ristić, N. Balakrishnan, The gamma-exponentiated exponential distribution, J. Stat. Comput. Simul. 82 (2012) 1191–1206.
[26] A. Gelman, J. Carlin, H. Stern, D. Rubin, Bayesian Data Analysis, Chapman and Hall/CRC, 2003.
[27] M.K. Cowles, B.P. Carlin, Markov chain Monte Carlo convergence diagnostics: a comparative review, J. Amer. Statist. Assoc. 91 (1996) 133–169.
[28] A. Gelman, D.B. Rubin, Inference from iterative simulation using multiple sequences (with discussion), Statist. Sci. 7 (1992) 457–472.
[29] Y.L. Luke, The Special Functions and Their Approximation, I, Academic Press, New York & London, 1969.