Relationships between distributions with certain symmetries

Statistics and Probability Letters 82 (2012) 1737–1744


M.C. Jones, Department of Mathematics and Statistics, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK

Article history: received 11 January 2012; received in revised form 17 May 2012; accepted 17 May 2012; available online 29 May 2012.

Abstract

The genesis of two-way links between the inverse Gaussian and Birnbaum–Saunders distributions is explored and extended. The most general results apply to pairs of distributions with a general 'S-symmetry' structure involving a self-inverse function closely related to a transformation function with certain properties. These general results arise by transformation from very simple properties of the familiar Azzalini-type skew-symmetric distributions. They specialise again to relationships between R-symmetric and log-symmetric distributions, between various models related to the inverse Gaussian and Birnbaum–Saunders distributions, relationships involving the sinh–arcsinh transformation, and others. Simple random variate generation is a practical consequence of these relationships. © 2012 Elsevier B.V. All rights reserved.

Keywords: Birnbaum–Saunders distribution; inverse Gaussian distribution; R-symmetry; random variate generation; skew-symmetric distribution

1. Introduction

This paper concerns relationships between classes of distributions with certain symmetry properties. As well as being of intrinsic structural and explanatory interest, the results allow random variate generation from classes of distributions that appear to be less amenable to straightforward generation, via random variates generated from classes of distributions for which random variate generation is easy. The main, general, results, to be presented in Section 2, though simple, might appear to be rather obscure. So, by way of introduction, I first present perhaps the two most important special cases of the general results, one known, one not.

1.1. Inverse Gaussian and Birnbaum–Saunders distributions

The first of these special cases is already known. It concerns the relationship between the inverse Gaussian distribution (e.g. Johnson et al., 1994, Chapter 15) and the Birnbaum–Saunders distribution (e.g. Johnson et al., 1995, Section 33.3). On the one hand, using notation designed to be consistent throughout the current paper, a random variable XIG > 0 has the inverse Gaussian (IG) distribution with density

$$f_{IG}(x) = \frac{1}{\sqrt{2\pi}\,\sigma x^{3/2}} \exp\left\{-\frac{1}{2\sigma^{2}}\left(\theta^{2}\sqrt{x} - \frac{1}{\sqrt{x}}\right)^{2}\right\}, \quad x > 0, \qquad (1.1)$$

with σ, θ > 0. (This matches with the usual parametrisation of the IG distribution (Johnson et al., 1994, (15.4a) on p. 261) if one takes σ = 1/√λ and θ = 1/√µ.) On the other hand, XBS > 0 has the Birnbaum–Saunders (BS) distribution with



Tel.: +44 1908 652209. E-mail address: [email protected].

0167-7152/$ – see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2012.05.014


density

$$f_{BS}(x) = \frac{1}{2\sqrt{2\pi}\,\sigma x^{3/2}}\,(1 + \theta^{2} x) \exp\left\{-\frac{1}{2\sigma^{2}}\left(\theta^{2}\sqrt{x} - \frac{1}{\sqrt{x}}\right)^{2}\right\}, \quad x > 0. \qquad (1.2)$$



(This matches with the usual parametrisation of the BS distribution (Johnson et al., 1995, p. 654) if σ = α/√β and θ = 1/√β.) It is immediately clear that the BS distribution is a 50:50 mixture of the IG distribution and its length-biased version (LBIG) with density θ²x fIG(x). However, the LBIG distribution is easily shown to be the distribution of 1/(θ⁴XIG) and so there appears the well-known result due to Desmond (1986) that

$$X_{BS} = \begin{cases} X_{IG} & \text{with probability } 1/2,\\ 1/(\theta^{4} X_{IG}) & \text{with probability } 1/2. \end{cases} \qquad (1.3)$$

The converse of this relationship is less immediate (but also known):

$$X_{IG}\,|\,X_{BS} = \begin{cases} X_{BS} & \text{with probability } 1/(1 + \theta^{2} X_{BS}),\\ 1/(\theta^{4} X_{BS}) & \text{with probability } \theta^{2} X_{BS}/(1 + \theta^{2} X_{BS}). \end{cases} \qquad (1.4)$$

It will be seen in Section 2 that this is an immediate consequence of the main results of this paper. From the point of view of IG random variate generation, the converse relationship (1.4) is very useful. This is because XBS, by its very construction (Birnbaum and Saunders, 1969), is an explicit transformation of an N(0, σ²) random variable, Zσ, say:

$$\pm Z_{\sigma} = \theta^{2}\sqrt{X_{BS}} - \frac{1}{\sqrt{X_{BS}}}, \qquad X_{BS} = \frac{1}{4\theta^{4}}\left(Z_{\sigma} \pm \sqrt{Z_{\sigma}^{2} + 4\theta^{2}}\right)^{2}; \qquad (1.5)$$

which sign is taken is unimportant because of the symmetry of the normal distribution. Despite its name, the IG distribution has no such simple transformation relationship to the Gaussian. Relationship (1.4), however, affords ready generation of an IG random variate from a BS random variate, utilising one further uniform random variable but with no rejection involved. In fact, (1.4) provides a standard method of IG generation, having been proposed by Michael et al. (1976) from a different standpoint. (Michael et al., 1976, specify which sign to take in the initial BS transformation but in fact there is no need to do so.) See also Chhikara and Folks (1989, Section 4.5). For book treatments of inverse Gaussian and Birnbaum–Saunders distributions and their relationships, see Marshall and Olkin (2007, Chapter 13) and especially Saunders (2007, Chapter 10).

1.2. R-symmetric and log-symmetric distributions

The second special case of my main results is more general, and is new. It concerns the relationship between R-symmetric distributions (R for reciprocal; see Mudholkar and Wang, 2007) and log-symmetric distributions (Seshadri, 1965; Lawless, 2003; Jones, 2008). On the one hand, R-symmetric distributions have densities on x > 0 satisfying f(θx) = f(θ/x) with θ > 0 the centre of R-symmetry (and mode of the distribution if it is unimodal). Their densities can be written in the form

$$f_{R}(x) = 2g_{\sigma}\left(x - \frac{\theta^{2}}{x}\right), \quad x > 0, \qquad (1.6)$$

where gσ(y) = σ⁻¹g₀(σ⁻¹y), y ∈ R, and g₀ is symmetric about zero. These are the Cauchy–Schlömilch distributions of Baker (2008); see also Chaubey et al. (2010). Log-symmetric distributions, on the other hand, are such that X/θ and θ/X have the same distribution, so their densities satisfy x²f(θx) = f(θ/x); their name comes from the equivalence of this to the ordinary symmetry of the distribution of log(X) about log(θ). These can be taken to have densities of the form

$$f_{L}(x) = \left(1 + \frac{\theta^{2}}{x^{2}}\right) g_{\sigma}\left(x - \frac{\theta^{2}}{x}\right), \quad x > 0, \qquad (1.7)$$

with gσ again being a scaled symmetric density on R. (The form for fL is general because x − (θ²/x) = 2θ sinh(log(x/θ)), so gσ is related to the symmetric distribution of log(x/θ) through a further scaled symmetric transformation.) XL ∼ fL is, of course, related to Yσ ∼ gσ via

$$\pm Y_{\sigma} = \frac{\theta^{2}}{X_{L}} - X_{L}, \qquad X_{L} = \frac{1}{2}\left(Y_{\sigma} \pm \sqrt{Y_{\sigma}^{2} + 4\theta^{2}}\right). \qquad (1.8)$$

Jones (2007) investigated transformations (1.8), albeit with a different focus, and with θ = 1. Log-symmetric distributions are therefore 50:50 mixtures of R-symmetric distributions and their (1/x²)-weighted counterparts, which are easily seen to be the distributions of θ²/XR, where XR ∼ fR, so it is the case that

$$X_{L} = \begin{cases} X_{R} & \text{with probability } 1/2,\\ \theta^{2}/X_{R} & \text{with probability } 1/2. \end{cases} \qquad (1.9)$$


Less immediately, it will also be found that



$$X_{R}\,|\,X_{L} = \begin{cases} X_{L} & \text{with probability } X_{L}^{2}/(X_{L}^{2} + \theta^{2}),\\ \theta^{2}/X_{L} & \text{with probability } \theta^{2}/(X_{L}^{2} + \theta^{2}). \end{cases} \qquad (1.10)$$
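Relationships (1.8) and (1.10) combine into an explicit generator for R-symmetric variates. A sketch with gσ normal (an illustrative assumption; any symmetric density works, and the function name is mine):

```python
import math
import random

def rvs_r_symmetric(sigma, theta, rng=random):
    """One draw from the R-symmetric density (1.6), taking g_sigma to be a
    centred normal (an illustrative choice; any symmetric density works)."""
    y = rng.gauss(0.0, sigma)                                    # Y_sigma ~ g_sigma
    x_l = 0.5 * (y + math.sqrt(y * y + 4.0 * theta ** 2))        # (1.8): X_L ~ f_L
    # (1.10): keep X_L with probability X_L^2/(X_L^2 + theta^2), else theta^2/X_L
    if rng.random() < x_l * x_l / (x_l * x_l + theta ** 2):
        return x_l
    return theta ** 2 / x_l
```

As a check, mixing the output 50:50 with its reciprocal counterpart θ²/X recovers a log-symmetric variate by (1.9), so E[log X] = log θ for the mixture.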

The above provides a new general explicit algorithm for generating random variates from R-symmetric distributions:

• generate Yσ from gσ ; • set XL = 12 (Yσ + Yσ2 + 4θ 2 ); • use a further uniform random variable to obtain XR from XL using (1.10). 1.3. The rest of the paper The remainder of the paper comprises two main sections and some closing remarks. Section 2 gives the main results of the paper framed in terms of a further generalisation of R-symmetry, called S-symmetry, (Theorem 2.1) and transformations thereof (Corollary 2.1). Remarks 2.2 and 2.4 in Section 2 describe how the specific results of Sections 1.1 and 1.2 arise from the general results of Section 2. In Section 3, I explore four more sets of special cases of the general results. The first concerns generalised inverse Gaussian and Birnbaum–Saunders distributions (Section 3.1), the second mixtures of inverse Gaussian and closely related models (Section 3.2), the third another tractable form of S-symmetric distribution (Section 3.3), and the fourth (Section 3.4) links with sinh–arcsinh transformations (Jones and Pewsey, 2009). Section 4 briefly sets the paper more firmly in context. 2. Main results As in Jones (2012), choose an increasing function Π : R → Ω , where Ω can be R+ or R, which satisfies

$$\Pi(y) - \Pi(-y) = y. \qquad (2.1)$$

This corresponds to

$$\pi(y) + \pi(-y) = 1, \qquad (2.2)$$

where π = Π′ is the derivative of Π. The most natural special cases of this are where π is the distribution function of a symmetric distribution on R. It turns out (Jones, 2012) that if such a distribution has tails that are lighter than Cauchy, then Ω = R⁺. An equivalent formulation to (2.1), given in Jones (2010) and shown to be equivalent in Jones (2012), is that

$$\Pi^{-1}(x) = x - s(x), \qquad (2.3)$$

where s : Ω → Ω is a self-inverse decreasing function, i.e. s(x) = s⁻¹(x) with s′(x) < 0 for all x ∈ Ω. A class of distributions that generalises the R-symmetric distributions has densities of the form

$$f_{S}(x) = 2g_{\sigma}(\Pi^{-1}(x)) = 2g_{\sigma}(x - s(x)), \quad x \in \Omega. \qquad (2.4)$$

See Jones (2010) when Ω = R⁺. The symmetry of this form is the S-symmetry mentioned, but not pursued, in Section 3.1.3 of Jones (2010): simply, fS(s(x)) = fS(x) for all x ∈ Ω. The central (modal, if unimodal) point of S-symmetric distributions is x₀ such that x₀ = s(x₀). Analogously, the relevant generalisations of the log-symmetric distributions are the transformation densities of the form

$$f_{T}(x) = \frac{g_{\sigma}(\Pi^{-1}(x))}{\pi(\Pi^{-1}(x))} = (1 - s'(x))\,g_{\sigma}(x - s(x)), \quad x \in \Omega. \qquad (2.5)$$

The symmetry of these distributions is that if XT ∼ fT, then XT and s(XT) have the same distribution.

Theorem 2.1. Let XT and XS be random variables following the distributions with densities fT and fS, respectively. Then,

$$X_{T} = \begin{cases} X_{S} & \text{with probability } 1/2,\\ s(X_{S}) & \text{with probability } 1/2. \end{cases} \qquad (2.6)$$

Conversely,

$$X_{S}\,|\,X_{T} = \begin{cases} X_{T} & \text{with probability } 1/(1 - s'(X_{T})),\\ s(X_{T}) & \text{with probability } -s'(X_{T})/(1 - s'(X_{T})). \end{cases} \qquad (2.7)$$

Proof. Relationship (2.6) is immediate. An insightful way to demonstrate relationship (2.7) is as follows. Introduce an 'Azzalini-type' skew-symmetric density on R (Azzalini, 1985) of the general type investigated by Wang et al. (2004). This is formed by multiplying the symmetric density gσ by the 'skewing' function π which satisfies (2.2):

$$g_{A}(y) = 2\pi(y)\,g_{\sigma}(y), \quad y \in \mathbb{R}. \qquad (2.8)$$


If Yσ ∼ gσ, then YA ∼ gA, where

$$Y_{A}\,|\,Y_{\sigma} = \begin{cases} Y_{\sigma} & \text{with probability } \pi(Y_{\sigma}),\\ -Y_{\sigma} & \text{with probability } 1 - \pi(Y_{\sigma}). \end{cases} \qquad (2.9)$$

Now, by construction XT = Π(Yσ) and, as is easily seen, XS = Π(YA) (Jones, 2012, Section 3.1). Applying Π to (2.9) results in (2.7), noting that

$$\Pi(-Y_{\sigma}) = \Pi(Y_{\sigma}) - Y_{\sigma} = X_{T} - \Pi^{-1}(X_{T}) = s(X_{T})$$

(using (2.1) and (2.3)) and

$$\pi(\Pi^{-1}(X_{T})) = \frac{1}{(\Pi^{-1})'(X_{T})} = \frac{1}{1 - s'(X_{T})}$$

(using (2.3) again). □
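Theorem 2.1 can be coded once and reused for any S-symmetric family: set XT = Π(Yσ), then select via (2.7). A sketch in which the callable-based interface and all names are mine, shown with the R-symmetric special case s(x) = θ²/x:

```python
import math
import random

def rvs_s_symmetric(draw_y, big_pi, s, s_prime, rng=random):
    """Generic draw from an S-symmetric density (2.4): X_T = Pi(Y_sigma),
    then X_S chosen from X_T by (2.7). All four callables are user-supplied."""
    x_t = big_pi(draw_y(rng))                      # X_T ~ f_T of (2.5)
    if rng.random() < 1.0 / (1.0 - s_prime(x_t)):  # first branch of (2.7)
        return x_t
    return s(x_t)                                  # second branch of (2.7)

# special case s(x) = theta^2/x, for which Pi(y) = (y + sqrt(y^2 + 4 theta^2))/2
theta = 1.5
draw = lambda rng: rvs_s_symmetric(
    draw_y=lambda r: r.gauss(0.0, 1.0),
    big_pi=lambda y: 0.5 * (y + math.sqrt(y * y + 4.0 * theta ** 2)),
    s=lambda x: theta ** 2 / x,
    s_prime=lambda x: -theta ** 2 / (x * x),
    rng=rng,
)
```

Since s′ < 0, the first-branch probability 1/(1 − s′) always lies in (0, 1), so no rejection is involved.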

Remark 2.1. The proof of Theorem 2.1 shows that the results in this paper are, in fact, driven by the very simple relationships between the skew-symmetric density gA and its symmetric progenitor gσ.

Remark 2.2. Relations (1.9) and (1.10) connecting R-symmetric distributions to log-symmetric distributions are the special cases of (2.6) and (2.7) with s(x) = θ²/x. The transformation Π⁻¹(x) = x − (θ²/x) is essentially the Cauchy–Schlömilch transformation (Boros and Moll, 2004, Section 13.2), introduced to statistics by Baker (2008). It, in turn, arises from

$$\pi(y) = \frac{1}{2}\left(1 + \frac{y}{\sqrt{y^{2} + 4\theta^{2}}}\right),$$

which is the distribution function of a rescaling of the ubiquitous t distribution with 2 degrees of freedom (Jones, 2002).

Remark 2.3. It follows from Theorem 2.1 that S-symmetric random variates can be generated in either of two equivalent ways. Both ways start by generating Yσ from gσ: (i) set XT = Π(Yσ) and obtain XS from XT using (2.7); (ii) obtain YA from Yσ using (2.9) and set XS = Π(YA).

The following corollary to Theorem 2.1, whose proof is immediate, widens its scope considerably.

Corollary 2.1. Let τ : Ω → Ω be an invertible transformation and let XT and XS be as defined in Theorem 2.1; also, define TT = τ(XT) and TS = τ(XS). Then,

$$T_{T} = \begin{cases} T_{S} & \text{with probability } 1/2,\\ \tau(s(\tau^{-1}(T_{S}))) & \text{with probability } 1/2. \end{cases} \qquad (2.10)$$

Conversely,

$$T_{S}\,|\,T_{T} = \begin{cases} T_{T} & \text{with probability } 1/(1 - s'(\tau^{-1}(T_{T}))),\\ \tau(s(\tau^{-1}(T_{T}))) & \text{with probability } -s'(\tau^{-1}(T_{T}))/(1 - s'(\tau^{-1}(T_{T}))). \end{cases} \qquad (2.11)$$

Remark 2.4. Application of Theorem 2.1 with s(x) = θ²/x for x > 0 and g₀ = φ, the standard normal density, pertains to fR being the root-reciprocal inverse Gaussian distribution, that is, the distribution of 1/√XIG. It is related in the sense of Theorem 2.1 to the root-reciprocal Birnbaum–Saunders distribution, that is, the distribution of 1/√XBS. It is Corollary 2.1 that leads to the relationship between the inverse Gaussian and Birnbaum–Saunders distributions themselves given in Section 1.1, by additionally taking τ(x) = 1/x². Theorem 5 of Section 10.6 of Saunders (2007) and its proof are similar to the development above, but for the inverse Gaussian/Birnbaum–Saunders special case only.

3. More special cases

3.1. Inverse Gaussian type and generalised Birnbaum–Saunders distributions

Leading on from Remark 2.4, I have already shown that relationships (1.3) and (1.4) continue to hold precisely as written for g₀ being any symmetric distribution on R and not just the normal (which plays no role except for being symmetric). This means that they hold for the generalised Birnbaum–Saunders (GBS) distribution of Díaz-García and Leiva-Sánchez (2005) (see also Sanhueza et al., 2008a) – this is the distribution of 1/XL² and has density (1.2) with φσ replaced by gσ – and the


inverse Gaussian type (IGT) distribution of Sanhueza et al. (2008b) — this is the distribution of 1/XR² and has density (1.1) with φσ replaced by gσ. Relationship (1.4) is not written explicitly elsewhere for the IGT and GBS distributions, but it is implicit in the use by Leiva et al. (2008) of essentially the Michael et al. (1976) random variate generation algorithm for the IGT distribution. The R-symmetric (or Cauchy–Schlömilch) distributions are now seen to be the class of 'root-reciprocal inverse Gaussian type' distributions!

3.2. Mixtures of inverse Gaussian (type) distributions

The special mixture of inverse Gaussian and scaled reciprocal inverse Gaussian distributions defined by extending (1.3) to

$$X_{MIG} = \begin{cases} X_{IG} & \text{with probability } 1 - p,\\ 1/(\theta^{4} X_{IG}) & \text{with probability } p, \end{cases} \qquad (3.1)$$

is the mixed inverse Gaussian (MIG) distribution of Jørgensen et al. (1991); see also Saunders (2007, Sections 10.3 and 10.6) and Balakrishnan et al. (2009). Replacing normality by symmetry results in the generalised mixture inverse Gaussian (GMIG) distribution of Leiva et al. (2010). As noted by Leiva et al. (2010), the first transformation in (1.5) leads back to the distribution with density

$$\left(1 - (1 - 2p)\frac{y}{\sqrt{y^{2} + 4\theta^{2}}}\right) g_{\sigma}(y), \quad y \in \mathbb{R}. \qquad (3.2)$$

This is the distribution of YA with probability p and −YA with probability 1 − p, where YA is as in (2.9) for the special case of Remark 2.2. Applying the inverse transformation to YA and −YA gives the following simple converse to (3.1):

$$X_{(G)MIG}\,|\,X_{(G)BS} = \begin{cases} X_{(G)BS} & \text{with probability } ((1-p) + p\theta^{2} X_{(G)BS})/(1 + \theta^{2} X_{(G)BS}),\\ 1/(\theta^{4} X_{(G)BS}) & \text{with probability } (p + (1-p)\theta^{2} X_{(G)BS})/(1 + \theta^{2} X_{(G)BS}). \end{cases} \qquad (3.3)$$
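Relationship (3.3), preceded by the BS construction (1.5), gives a direct MIG sampler using one normal and one uniform variate per draw. A sketch assuming normal gσ (the GMIG case would substitute another symmetric density; the function name is mine):

```python
import math
import random

def rvs_mig(sigma, theta, p, rng=random):
    """One draw from the MIG distribution (3.1), via (1.5) plus the converse (3.3)."""
    z = rng.gauss(0.0, sigma)
    x_bs = (z + math.sqrt(z * z + 4.0 * theta ** 2)) ** 2 / (4.0 * theta ** 4)  # (1.5)
    w = theta ** 2 * x_bs
    # (3.3): keep X_BS with probability ((1-p) + p*w)/(1 + w), else reciprocate
    if rng.random() < ((1.0 - p) + p * w) / (1.0 + w):
        return x_bs
    return 1.0 / (theta ** 4 * x_bs)
```

Setting p = 0 recovers the IG sampler of (1.4), p = 1/2 returns X_BS itself, and p = 1 gives the LBIG distribution, matching the mixture structure of (3.1).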

Alternatively, (3.3) arises by combining (3.1) with (1.4), for general gσ. Either way, this reflects the attractive structure of the (G)MIG distribution: it reduces to the IG(T) distribution when p = 0, the LBIG(T) distribution when p = 1 and the (G)BS distribution when p = 1/2. The latter is because X(G)BS and 1/(θ⁴X(G)BS) have the same distribution and the probabilities in (3.3) both reduce to 1/2 when p = 1/2. The MIG random variate generation scheme given by (3.3) is:

• generate Zσ from φσ;
• (from (1.5)) set XBS = (Zσ + √(Zσ² + 4θ²))²/(4θ⁴);
• use a further uniform random variable to obtain XMIG from XBS using (3.3).

This is an improvement on MIG random variate generation schemes so far used in the literature. In terms of time and numbers of random variates used, this method is essentially the same as the Michael et al. (1976) method for IG random variate generation given by (1.4). It compares favourably with the suggestion of Jørgensen et al. (1991, p. 81), which adds to Michael et al. (1976) IG generation a further uniform random variable and, a proportion p of the time, an additional χ₁² random variate; it also shortcuts the method used by Gupta and Kundu (2011, p. 2700), who effectively use (1.4) and (3.1) consecutively, thereby employing one more uniform random variable than is needed by use of (3.3) directly.

Relationship (3.3) can be readily generalised to special mixtures of S-symmetric distributions, of R-symmetric distributions and of the distributions of their transformations. For example, using (2.7), the relationship for a mixture of S-symmetric distributions becomes

$$X_{MS}\,|\,X_{T} = \begin{cases} X_{T} & \text{with probability } ((1-p) - p\,s'(X_{T}))/(1 - s'(X_{T})),\\ s(X_{T}) & \text{with probability } (p - (1-p)\,s'(X_{T}))/(1 - s'(X_{T})). \end{cases} \qquad (3.4)$$

3.3. Log-exponential symmetry

As the Cauchy–Schlömilch transformation arises from the t₂ distribution function (for π), so the following alternative tractable transformation arises from the logistic distribution function (Jones, 2010):

$$\Pi(y) = \frac{1}{a}\log(1 + e^{ay}), \qquad \Pi^{-1}(x) = \frac{1}{a}\log(e^{ax} - 1), \qquad s(x) = -\frac{1}{a}\log(1 - e^{-ax}),$$

with a > 0. The analogues of (1.6) and (1.7) are

$$f_{LE}(x) = 2g_{\sigma}\left(\frac{1}{a}\log(e^{ax} - 1)\right), \quad x > 0, \qquad (3.5)$$


and

$$f_{E}(x) = \frac{1}{1 - e^{-ax}}\,g_{\sigma}\left(\frac{1}{a}\log(e^{ax} - 1)\right), \quad x > 0. \qquad (3.6)$$

The symmetry of fLE is that

$$f_{LE}\left(-\frac{1}{a}\log(1 - e^{-ax})\right) = f_{LE}(x).$$

The central (modal, if unimodal) point in this case is x₀ = log(2)/a. The symmetry of fE is that XE and −(1/a)log(1 − e^{−aXE}) have the same distribution, with density fE, or equivalently, that e^{−aXE} and 1 − e^{−aXE} have the same distribution. These can be thought of as 'log-exponential', or LE-, symmetries. For random variables with these distributions, Theorem 2.1 gives

$$X_{E} = \begin{cases} X_{LE} & \text{with probability } 1/2,\\ -\frac{1}{a}\log(1 - e^{-aX_{LE}}) & \text{with probability } 1/2, \end{cases} \qquad (3.7)$$

and

$$X_{LE}\,|\,X_{E} = \begin{cases} X_{E} & \text{with probability } 1 - e^{-aX_{E}},\\ -\frac{1}{a}\log(1 - e^{-aX_{E}}) & \text{with probability } e^{-aX_{E}}. \end{cases} \qquad (3.8)$$

Neither distribution (3.5) nor (3.6) has been extensively studied as yet, but they may repay such effort.
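For this log-exponential case, route (i) of Remark 2.3 with the logistic-derived Π gives a sampler for (3.5) via (3.8). A sketch with gσ normal (an illustrative choice; the function name is mine):

```python
import math
import random

def rvs_le_symmetric(sigma, a, rng=random):
    """One draw from the LE-symmetric density (3.5): X_E = Pi(Y_sigma) with
    Pi(y) = log(1 + e^{ay})/a, then the selection step (3.8)."""
    y = rng.gauss(0.0, sigma)
    x_e = math.log1p(math.exp(a * y)) / a            # X_E ~ f_E of (3.6)
    # (3.8): keep X_E with probability 1 - e^{-a X_E}, else s(X_E)
    if rng.random() < 1.0 - math.exp(-a * x_e):
        return x_e
    return -math.log1p(-math.exp(-a * x_e)) / a      # s(x) = -log(1 - e^{-ax})/a
```

A usable check: by (3.7), mixing the output 50:50 with s(X) reproduces X_E, and the LE-symmetry above then forces E[e^{−aX_E}] = 1/2.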

3.4. Sinh–arcsinh symmetry

The Π function associated with (3.2) is clearly Π(y) = ½(y − (1 − 2p)√(y² + 1)) (with 4θ² set equal to 1) for any 0 < p < 1 with, for x ∈ R, inverse transformation

$$\Pi^{-1}(x) = \frac{1}{2p(1-p)}\left(x + (1 - 2p)\sqrt{x^{2} + p(1-p)}\right).$$

Using the fact that, in general, Ax + B√(x² + C²) = K sinh(sinh⁻¹(Dx) + E) where

$$D = 1/C, \qquad E = \tanh^{-1}(B/A), \qquad K = C\sqrt{A^{2} - B^{2}},$$

it is the case that Π⁻¹ can also be written as

$$\Pi^{-1}(x) = \sinh\left(\sinh^{-1}\left(\frac{x}{\sqrt{p(1-p)}}\right) + \tanh^{-1}(1 - 2p)\right) = \Pi_{SAS}^{-1}(x).$$

This is a version of the sinh–arcsinh transformation introduced to statistics by Jones and Pewsey (2009); in their notation, it is

$$\Pi^{-1}(x) = S_{\epsilon(p),1}\left(x/\sqrt{p(1-p)}\right) \quad \text{where } \epsilon(p) = \tanh^{-1}(2p - 1).$$

Here, the parameter δ in Jones and Pewsey's (2009) model, which controls tailweight, has been removed by being set equal to 1, but the skewness parameter ϵ = ϵ(p) has been retained, and the transformation has been rescaled. The associated (novel) self-inverse function is

$$s_{SAS}(x) = -\frac{1}{2p(1-p)}\left((2p^{2} - 2p + 1)x + (1 - 2p)\sqrt{x^{2} + p(1-p)}\right) = -\sqrt{p(1-p)}\,\sinh\left(\sinh^{-1}\left(\frac{x}{\sqrt{p(1-p)}}\right) + 2\tanh^{-1}(1 - 2p)\right).$$

The S-symmetric distribution associated with Π_SAS (by inserting Π⁻¹_SAS(x) into formula (2.4)) is the 'transformation of scale' distribution using transformation t6 given by (7) of Jones (2012); see also Section 3.3.1 of that paper. Write XSA ∼ fSA for this random variable and density. The 'transformation of random variable' distribution associated with Π_SAS (by inserting Π⁻¹_SAS(x) into formula (2.5)) is a (rescaled) sinh–arcsinh density as defined by Jones and Pewsey (2009). Write XSAS ∼ fSAS for this random variable and density. Special cases of the sinh–arcsinh distribution with δ = 1 (and g₀ the Student t density) are investigated by Rosco et al. (2011). The symmetry associated with the fSAS distribution is perhaps most simply written as

$$(1 - p)\left(X_{SAS} + \sqrt{X_{SAS}^{2} + p(1-p)}\right) \quad \text{and} \quad p\left(\sqrt{X_{SAS}^{2} + p(1-p)} - X_{SAS}\right)$$

having the same distribution.


The analogue of (2.6) for these distributions is obvious. That of (2.7) gives XSA , the random variable put forward as of interest in Jones (2012), in terms of XSAS , a version of the sinh–arcsinh random variable of Jones and Pewsey (2009):

$$X_{SA}\,|\,X_{SAS} = \begin{cases} X_{SAS} & \text{with probability } p_{SAS}(X_{SAS}),\\ s_{SAS}(X_{SAS}) & \text{with probability } 1 - p_{SAS}(X_{SAS}), \end{cases} \qquad (3.9)$$

where



$$p_{SAS}(X_{SAS}) = \frac{2\sqrt{X_{SAS}^{2} + p(1-p)}\left(\sqrt{X_{SAS}^{2} + p(1-p)} - (1 - 2p)X_{SAS}\right)}{4X_{SAS}^{2} + 1}.$$
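The self-inverse function s_SAS and the branch probability p_SAS = 1/(1 − s′_SAS) of (3.9) are easy to sanity-check numerically. A sketch (function names are mine):

```python
import math

def s_sas(x, p):
    """Self-inverse function s_SAS of Section 3.4, in its sinh-arcsinh form."""
    c = math.sqrt(p * (1.0 - p))
    return -c * math.sinh(math.asinh(x / c) + 2.0 * math.atanh(1.0 - 2.0 * p))

def p_sas(x, p):
    """First-branch probability of (3.9), i.e. 1/(1 - s_SAS'(x)) rationalised."""
    r = math.sqrt(x * x + p * (1.0 - p))
    return 2.0 * r * (r - (1.0 - 2.0 * p) * x) / (4.0 * x * x + 1.0)

for p in (0.2, 0.5, 0.8):
    for x in (-1.5, 0.0, 2.0):
        assert abs(s_sas(s_sas(x, p), p) - x) < 1e-9   # s_SAS is self-inverse
        assert 0.0 < p_sas(x, p) < 1.0                 # a genuine probability
# at p = 1/2 the skewness vanishes: s_SAS(x) = -x and both branches are 50:50
assert abs(s_sas(1.0, 0.5) + 1.0) < 1e-12
assert abs(p_sas(1.0, 0.5) - 0.5) < 1e-12
```

The p = 1/2 case recovers ordinary symmetry about zero, consistent with the reduction of (3.2) to gσ itself.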

4. Closing remarks

Remark 4.1. At the heart of this paper are the facts that the S-symmetric distributions (2.4) and their transformation counterparts (2.5) are the results of applying the same transformation function Π to two closely related random variables: (2.4) comes from setting XS = Π(YA) where YA ∼ gA and (2.5) comes from setting XT = Π(Yσ) where Yσ ∼ gσ; Yσ and YA are related by (2.9); hence Remark 2.1. Relationships (2.6) and (2.7), and all their special cases, are therefore immediate consequences of applying the transformation Π to a symmetric random variable and to its Azzalini-type skew-symmetric analogue. It also follows that random variate generation for S-symmetric random variables XS can be described without explicit mention of XT (as in Remark 2.3). However, it seems to be informative (as well as equivalent) to make the resulting links between XS and XT explicit in the way I have done here.

Remark 4.2. One could also regard the current paper as consisting of extensions and applications, and hence exploration of the generality, of the approach of Michael et al. (1976). Personally, however, I find their (equivalent) viewpoint of transformations with two roots applied to symmetric random variables (as given at (1.8)) less natural than the approach above.

Remark 4.3. The GBS′′ distributions of Vilca-Labra and Leiva-Sánchez (2006) consist in applying the Birnbaum–Saunders transformation (1.5) to Azzalini-type distributions (2.8), but without the special link between Π and π.

Remark 4.4. Connections are also made between R-symmetry and log-symmetry in Chaubey et al. (2010). Their results are of a different nature, concerning multiplication of general random variables by uniform random variables (as opposed to using uniform variates to choose randomly between other random variables), and require unimodality.
Acknowledgements

I am grateful to the referees for advantageous minor presentational changes, for useful additional references, and for prompting a little more work on the random variate generation side, as well as their general positiveness.

References

Azzalini, A., 1985. A class of distributions which includes the normal ones. Scandinavian J. Statist. 12, 171–178.
Baker, R., 2008. Probabilistic applications of the Schlömilch transformation. Commun. Statist. Theor. Meth. 37, 2162–2176.
Balakrishnan, N., Leiva, V., Sanhueza, A., Cabrera, E., 2009. Mixture inverse Gaussian distributions and its transformations, moments and applications. Statist. 43, 91–104.
Birnbaum, Z.W., Saunders, S.C., 1969. A new family of life distributions. J. Appl. Probab. 6, 319–327.
Boros, G., Moll, V.H., 2004. Irresistible Integrals; Symbolics, Analysis and Experiments in the Evaluation of Integrals. Cambridge University Press, Cambridge.
Chaubey, Y.P., Mudholkar, G.S., Jones, M.C., 2010. Reciprocal symmetry, unimodality and Khintchine's theorem. Proc. Roy. Soc. Ser. A 466, 2079–2096.
Chhikara, R.S., Folks, J.L., 1989. The Inverse Gaussian Distribution; Theory, Methodology, and Applications. Marcel Dekker, New York.
Desmond, A.F., 1986. On the relationship between two fatigue-life models. IEEE Trans. Reliab. 35, 167–169.
Díaz-García, J.A., Leiva-Sánchez, V., 2005. A new family of life distributions based on the elliptically contoured distributions. J. Statist. Planning Inference 128, 445–457.
Gupta, R.C., Kundu, D., 2011. Weighted inverse Gaussian — a versatile lifetime model. J. Appl. Statist. 38, 2695–2708.
Johnson, N.L., Kotz, S., Balakrishnan, N., 1994. Continuous Univariate Distributions, second ed., vol. 1. John Wiley, New York.
Johnson, N.L., Kotz, S., Balakrishnan, N., 1995. Continuous Univariate Distributions, second ed., vol. 2. John Wiley, New York.
Jones, M.C., 2002. Student's simplest distribution. J. Roy. Statist. Soc. Ser. D 51, 41–49.
Jones, M.C., 2007. Connecting distributions with power tails on the real line, the half line and the interval. Internat. Statist. Rev. 75, 58–69.
Jones, M.C., 2008. On reciprocal symmetry. J. Statist. Planning Inference 138, 3039–3043.
Jones, M.C., 2010. Distributions generated by transformations of scale using an extended Schlömilch transformation. Sankhya Ser. A 72, 359–375.
Jones, M.C., 2012. Generating distributions by transformation of scale. Unpublished.
Jones, M.C., Pewsey, A., 2009. Sinh-arcsinh distributions. Biometrika 96, 761–780.
Jørgensen, B., Seshadri, V., Whitmore, G.A., 1991. On the mixture of the inverse Gaussian distribution with its complementary reciprocal. Scandinavian J. Statist. 18, 77–89.
Lawless, J.F., 2003. Statistical Models and Methods for Lifetime Data, second ed. Wiley, Hoboken, NJ.
Leiva, V., Hernandez, H., Sanhueza, A., 2008. An R package for a general class of inverse Gaussian distributions. J. Statist. Soft. 26 (4).
Leiva, V., Sanhueza, A., Kotz, S., Araneda, N., 2010. A unified mixture model based on the inverse Gaussian distribution. Pak. J. Statist. 26, 445–460.


Marshall, A.W., Olkin, I., 2007. Life Distributions; Structure of Nonparametric, Semiparametric, and Parametric Families. Springer, New York.
Michael, J.R., Schucany, W.R., Haas, R.W., 1976. Generating random variates using transformations with multiple roots. Amer. Statist. 30, 88–90.
Mudholkar, G.S., Wang, H., 2007. IG-symmetry and R-symmetry: inter-relations and applications to the inverse Gaussian theory. J. Statist. Planning Inference 137, 3655–3671.
Rosco, J.F., Jones, M.C., Pewsey, A., 2011. Skew t distributions via the sinh-arcsinh transformation. Test 20, 630–652.
Sanhueza, A., Leiva, V., Balakrishnan, N., 2008a. The generalized Birnbaum–Saunders distribution and its theory, methodology, and application. Commun. Statist. Theor. Meth. 37, 645–670.
Sanhueza, A., Leiva, V., Balakrishnan, N., 2008b. A new class of inverse Gaussian type distributions. Metrika 68, 31–49.
Saunders, S.C., 2007. Reliability, Life Testing, and Prediction of Service Lives; for Engineers and Scientists. Springer, New York.
Seshadri, V., 1965. On random variables which have the same distribution as their reciprocals. Canad. Math. Bull. 8, 819–824.
Vilca-Labra, F., Leiva-Sánchez, V., 2006. A new fatigue life model based on the family of skew-elliptical distributions. Commun. Statist. Theor. Meth. 35, 229–244.
Wang, J.Z., Boyer, J., Genton, M.G., 2004. A skew-symmetric representation of multivariate distributions. Statist. Sinica 14, 1259–1270.