Journal of Statistical Planning and Inference 139 (2009) 3799--3819
doi:10.1016/j.jspi.2009.05.018

Order statistics from trivariate normal and t_ν-distributions in terms of generalized skew-normal and skew-t_ν distributions

A. Jamalizadeh^a, N. Balakrishnan^b,*

^a Department of Statistics, Shahid Bahonar University, Kerman 76169-14111, Iran
^b Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada L8S 4K1

* Corresponding author. E-mail addresses: [email protected] (A. Jamalizadeh), [email protected] (N. Balakrishnan).

ARTICLE INFO

Available online 20 May 2009

MSC: 62H05; 62H10; 62E10; 62E15

Keywords: Order statistics; Multivariate t-distribution; Unified multivariate skew-normal distribution; Unified multivariate skew-elliptical distribution; Generalized skew-normal distribution; Orthant probabilities; Moment generating function; Mixture distribution; Bivariate t-distribution; Generalized skew-t distribution

ABSTRACT

We consider here a generalization of the skew-normal distribution, GSN(λ1, λ2, ρ), defined through a standard bivariate normal distribution with correlation ρ, which is a special case of the unified multivariate skew-normal distribution studied recently by Arellano-Valle and Azzalini [2006. On the unification of families of skew-normal distributions. Scand. J. Statist. 33, 561–574]. We then present some simple and useful properties of this distribution and also derive its moment generating function in an explicit form. Next, we show that distributions of order statistics from the trivariate normal distribution are mixtures of these generalized skew-normal distributions; thence, using the established properties of the generalized skew-normal distribution, we derive the moment generating functions of order statistics, and also present expressions for means and variances of these order statistics. Next, we introduce a generalized skew-t distribution, which is a special case of the unified multivariate skew-elliptical distribution presented by Arellano-Valle and Azzalini [2006. On the unification of families of skew-normal distributions. Scand. J. Statist. 33, 561–574] and is in fact a three-parameter generalization of Azzalini and Capitanio's [2003. Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t distribution. J. Roy. Statist. Soc. Ser. B 65, 367–389] univariate skew-t form. We then use the relationship between the generalized skew-normal and skew-t distributions to discuss some properties of the generalized skew-t distribution as well as distributions of order statistics from bivariate and trivariate t-distributions. We show that these distributions of order statistics are indeed mixtures of generalized skew-t distributions, and then use this property to derive explicit expressions for means and variances of these order statistics. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

A random variable Z is said to have a standard skew-normal distribution with parameter λ ∈ R, denoted by Z ∼ SN(λ), if its probability density function (pdf) is (Azzalini, 1985)

$$f_{SN}(z;\lambda)=2\phi(z)\Phi(\lambda z),\qquad z\in\mathbb{R},\tag{1}$$
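As a quick numerical illustration of Eq. (1) (this snippet is ours, not part of the paper; it assumes NumPy and SciPy are available, and the function name `sn_pdf` is our own), the skew-normal density integrates to one for any λ, and λ = 0 recovers the standard normal:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def sn_pdf(z, lam):
    """Skew-normal density of Eq. (1): 2 * phi(z) * Phi(lam * z)."""
    return 2.0 * norm.pdf(z) * norm.cdf(lam * z)

# Total mass is 1 for any skewness parameter lam.
total_mass, _ = quad(lambda t: sn_pdf(t, 3.0), -np.inf, np.inf)

# lam = 0 gives exactly the N(0, 1) density.
z = np.linspace(-5.0, 5.0, 101)
matches_normal = np.allclose(sn_pdf(z, 0.0), norm.pdf(z))
```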

where φ(·) and Φ(·) are the standard normal pdf and cdf, respectively. The corresponding cumulative distribution function (cdf) is denoted by Φ_SN(z; λ). This distribution has been studied and generalized by several authors including Azzalini (1986), Henze (1986), Azzalini and Dalla Valle (1996), Branco and Dey (2001), Loperfido (2001), Arnold and Beaver (2002), and Azzalini and Chiogna (2004). A two-parameter skew-normal distribution with some nice properties was introduced by Arellano-Valle et al. (2004). Balakrishnan (2002), after observing an intricate connection between the standard skew-normal distribution in (1) and order statistics from a sample of size 2 from the standard normal distribution, introduced two more one-parameter skew-normal families. A nice survey of developments on the skew-normal distribution and its multivariate form is due to Azzalini (2005). Liseo and Loperfido (2003) gave a Bayesian interpretation of the multivariate skew-normal distribution, while Gonzalez-Farias et al. (2004), Arellano-Valle and Genton (2005), and Arellano-Valle and Azzalini (2006) have all discussed various generalizations and multivariate forms of skew-normal distributions. In particular, the last authors have presented the unified multivariate skew-normal distribution as follows. Let U and V be two random vectors of dimensions m and n, respectively, such that

$$\begin{pmatrix} \mathbf{U} \\ \mathbf{V} \end{pmatrix} \sim N_{m+n}\!\left( \begin{pmatrix} \gamma \\ \xi \end{pmatrix},\; \begin{pmatrix} \Gamma & \Delta^{T} \\ \Delta & \Omega \end{pmatrix} \right).$$

Then, the n-variate random vector X is said to have the unified multivariate skew-normal distribution with parameter θ = (ξ, γ, Ω, Γ, Δ), where ξ ∈ R^n, γ ∈ R^m, Ω ∈ R^{n×n}, Γ ∈ R^{m×m} (with Ω and Γ being positive definite covariance matrices) and Δ ∈ R^{n×m}, denoted by X ∼ SUN_{n,m}(ξ, γ, Ω, Γ, Δ), if

$$\mathbf{X} \stackrel{d}{=} \mathbf{V} \mid \mathbf{U} > \mathbf{0}.$$

The density function of X has been shown by Arellano-Valle and Azzalini (2006) to be

$$f_{SUN_{n,m}}(\mathbf{x};\theta) = \frac{\phi_n(\mathbf{x};\xi,\Omega)\,\Phi_m\!\big(\gamma+\Delta^{T}\Omega^{-1}(\mathbf{x}-\xi);\;\Gamma-\Delta^{T}\Omega^{-1}\Delta\big)}{\Phi_m(\gamma;\Gamma)},\qquad \mathbf{x}\in\mathbb{R}^n,\tag{2}$$

where φ_n(·; ξ, Ω) is the pdf of N_n(ξ, Ω), and Φ_m(·; Γ − Δ^TΩ^{−1}Δ) and Φ_m(·; Γ) are the cdf's of N_m(0, Γ − Δ^TΩ^{−1}Δ) and N_m(0, Γ), respectively. The moment generating function (MGF) of X ∼ SUN_{n,m}(θ) can be derived from (2) to be

$$M_{SUN_{n,m}}(\mathbf{s};\theta) = \frac{\exp\!\big(\xi^{T}\mathbf{s}+\tfrac{1}{2}\mathbf{s}^{T}\Omega\mathbf{s}\big)\,\Phi_m(\gamma+\Delta^{T}\mathbf{s};\Gamma)}{\Phi_m(\gamma;\Gamma)},\qquad \mathbf{s}\in\mathbb{R}^n.\tag{3}$$

In this paper, we consider a three-parameter generalization of the skew-normal distribution, denoted by GSN(λ1, λ2, ρ), defined through a standard bivariate normal distribution with correlation ρ, which is a special case of the unified multivariate skew-normal distribution in (2). After describing briefly this generalized skew-normal distribution and listing some of its simple properties in Section 2, we derive the moment generating function of GSN(λ1, λ2, ρ) in an explicit form in Section 3. In Section 4, we establish Stein's lemma for the generalized skew-normal distribution and then use it, for example, to derive higher-order moments of this distribution. In Section 5, we discuss the strong unimodality and a useful probabilistic representation of this distribution, and also the convolution of two independent Azzalini skew-normal random variables with pdf as in (1). In Section 6, we consider order statistics from the trivariate normal distribution and show that their distributions are mixtures of these generalized skew-normal distributions. In Section 7, using these mixture forms and the properties of the generalized skew-normal distribution, the moment generating functions of the order statistics are derived, and explicit expressions for means and variances are also obtained.

Next, in Section 8, we introduce a generalized skew-t distribution, which is a special case of the unified multivariate skew-elliptical distribution presented by Arellano-Valle and Azzalini (2006). This is, in fact, a three-parameter generalization of Azzalini and Capitanio's (2003) univariate skew-t form, for which recurrence relations for the distribution function have been derived recently by Jamalizadeh et al. (2009). In Section 9, we present expressions for its distribution function and density function in some special cases, and for its moments in Section 10. In Section 11, we establish some properties and representations for this generalized skew-t distribution.

Finally, in Sections 12 and 13, we discuss the distributions of order statistics from bivariate and trivariate t-distributions, respectively. We show that these distributions of order statistics are indeed mixtures of generalized skew-t distributions, and then use this property to derive explicit expressions, for example, for means and variances of these order statistics.

Recently, Arellano-Valle and Genton (2007, 2008) have derived the exact distributions of the largest order statistic and of linear combinations of order statistics from multivariate normal and multivariate t-distributions in terms of some general expressions involving skew-normal distributions. Jamalizadeh and Balakrishnan (2008) discussed distributions and moments of order statistics from bivariate skew-normal and skew-t distributions in terms of generalized skew-normal and skew-t distributions. In this paper, however, by focusing on the bivariate and trivariate cases, and utilizing results on generalized skew-normal and skew-t distributions, we derive explicit expressions for the distributions, densities, moment generating functions and moments of all order statistics in these cases.

It is important to mention here that distributions and moment properties of order statistics from a bivariate normal distribution have received attention from several authors including Gupta and Pillai (1965), Basu and Ghosh (1978), Nagaraja (1982), Balakrishnan (1993), Cain (1994) and Cain and Pan (1995); see also Kotz et al. (2000) and David and Nagaraja (2003). In this respect, the results established here on distributions and moments of order statistics from the trivariate normal distribution, via the generalized skew-normal distribution, are generalizations of these works. Moreover, the results established here on distributions and moments of order statistics from bivariate and trivariate t-distributions, via the generalized skew-t distribution, are new and interesting.


2. Definition and some simple properties

In this section, we consider a special case of the unified multivariate skew-normal distribution in (2). If Z_{λ1,λ2,ρ} is a random variable such that

$$Z_{\lambda_1,\lambda_2,\rho} \stackrel{d}{=} X \mid (Y_1 < \lambda_1 X,\; Y_2 < \lambda_2 X),\qquad \lambda_1,\lambda_2\in\mathbb{R},\; |\rho|<1,\tag{4}$$

where X ∼ N(0, 1) independent of (Y1, Y2)^T ∼ N2(0, 0, 1, 1, ρ) (standard bivariate normal distribution with correlation coefficient ρ), then

$$Z_{\lambda_1,\lambda_2,\rho} \sim SUN_{1,2}\!\left(0,\;\begin{pmatrix}0\\0\end{pmatrix},\;1,\;\begin{pmatrix}1+\lambda_1^2 & \rho+\lambda_1\lambda_2 \\ \rho+\lambda_1\lambda_2 & 1+\lambda_2^2\end{pmatrix},\;(\lambda_1,\lambda_2)\right).$$

From the general form of the density function of the SUN model in (2), we have the pdf of Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ), defined in (4), as

$$f_{GSN}(z;\lambda_1,\lambda_2,\rho) = c(\lambda_1,\lambda_2,\rho)\,\phi(z)\,\Phi_2(\lambda_1 z,\lambda_2 z;\rho),\qquad z\in\mathbb{R},\tag{5}$$

with λ1, λ2 ∈ R, |ρ| < 1, and Φ2(·, ·; ρ) denoting the cdf of N2(0, 0, 1, 1, ρ). The corresponding cdf is denoted by F_GSN(z; λ1, λ2, ρ). For determining c(λ1, λ2, ρ) in (5), we note that

$$c(\lambda_1,\lambda_2,\rho) = \frac{1}{a(\lambda_1,\lambda_2,\rho)},$$

where

$$a(\lambda_1,\lambda_2,\rho) = P(Y_1 < \lambda_1 X,\; Y_2 < \lambda_2 X)\tag{6}$$

with X ∼ N(0, 1) independent of (Y1, Y2)^T ∼ N2(0, 0, 1, 1, ρ), as before. We then have the following lemma.

Lemma 1. We have

$$a(\lambda_1,\lambda_2,\rho) = \frac{1}{2\pi}\cos^{-1}\!\left(\frac{-(\rho+\lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\sqrt{1+\lambda_2^2}}\right).\tag{7}$$

Proof. Since a(λ1, λ2, ρ) = P(Y1 − λ1X < 0, Y2 − λ2X < 0), using the orthant probability expression for

$$\left(\frac{Y_1-\lambda_1 X}{\sqrt{1+\lambda_1^2}},\;\frac{Y_2-\lambda_2 X}{\sqrt{1+\lambda_2^2}}\right)^{T} \sim N_2\!\left(0,0,1,1,\frac{\rho+\lambda_1\lambda_2}{\sqrt{1+\lambda_1^2}\sqrt{1+\lambda_2^2}}\right)$$

(see, for example, Kotz et al., 2000), we obtain the expression in (7). □
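The arccosine form in (7) can be checked numerically (our snippet, not part of the paper; it assumes SciPy's bivariate normal cdf, and `a_weight` is our own name), since a(λ1, λ2, ρ) is just the orthant probability Φ2(0, 0; r) at the correlation r appearing in the proof:

```python
import numpy as np
from scipy.stats import multivariate_normal

def a_weight(l1, l2, rho):
    """Eq. (7): a = arccos(-(rho + l1*l2)/sqrt((1+l1^2)(1+l2^2))) / (2*pi)."""
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    return np.arccos(-r) / (2.0 * np.pi)

l1, l2, rho = 0.8, -0.5, 0.3
r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))

# Orthant probability P(T1 < 0, T2 < 0) for standard bivariate normal with corr r.
orthant = multivariate_normal.cdf([0.0, 0.0], mean=[0.0, 0.0],
                                  cov=[[1.0, r], [r, 1.0]])
closed_form = a_weight(l1, l2, rho)
```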



Using the expression of a(λ1, λ2, ρ) in (7), we obtain c(λ1, λ2, ρ) as

$$c(\lambda_1,\lambda_2,\rho) = \frac{2\pi}{\cos^{-1}\!\left(\dfrac{-(\rho+\lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\sqrt{1+\lambda_2^2}}\right)}.\tag{8}$$

Thus, the generalized skew-normal density function in (5) becomes

$$f_{GSN}(z;\lambda_1,\lambda_2,\rho) = \frac{2\pi}{\cos^{-1}\!\left(\dfrac{-(\rho+\lambda_1\lambda_2)}{\sqrt{1+\lambda_1^2}\sqrt{1+\lambda_2^2}}\right)}\,\phi(z)\,\Phi_2(\lambda_1 z,\lambda_2 z;\rho),\qquad z,\lambda_1,\lambda_2\in\mathbb{R},\;|\rho|<1.\tag{9}$$
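As a numerical sanity check of the normalizing constant (8) (this snippet and the name `gsn_pdf` are ours; it assumes SciPy is available), the density (9) should integrate to one:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.integrate import quad

def gsn_pdf(z, l1, l2, rho):
    """Generalized skew-normal density of Eq. (9)."""
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    c = 2.0 * np.pi / np.arccos(-r)                      # Eq. (8)
    phi2 = multivariate_normal.cdf([l1 * z, l2 * z], mean=[0.0, 0.0],
                                   cov=[[1.0, rho], [rho, 1.0]])
    return c * norm.pdf(z) * phi2

# Tails beyond |z| = 8 are numerically negligible here.
mass, _ = quad(gsn_pdf, -8.0, 8.0, args=(1.2, -0.4, 0.5))
```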

In particular, if Z ∼ GSN(λ, 0, ρ) = GSN(0, λ, ρ), then the density of Z is

$$f_{GSN}(z;\lambda,\rho) = \frac{\pi}{\cos^{-1}\!\left(\dfrac{-\rho}{\sqrt{1+\lambda^2}}\right)}\,\phi(z)\,\Phi_{SN}\!\left(\lambda z;\;\frac{-\rho}{\sqrt{1-\rho^2}}\right),\qquad z\in\mathbb{R},\;\lambda\in\mathbb{R},\;-1<\rho<1,\tag{10}$$

where Φ_SN(·; τ) denotes the cdf of Azzalini's skew-normal SN(τ) distribution.


The following are some simple properties of the generalized skew-normal density in (9):

(1) GSN(0, 0, 0) = N(0, 1), i.e., Z_{0,0,0} ∼ N(0, 1);
(2) GSN(λ, 0, 0) = GSN(0, λ, 0) = SN(λ);
(3) GSN(0, 0, ρ) = N(0, 1), i.e., Z_{0,0,ρ} ∼ N(0, 1);
(4) Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ) ⇔ −Z_{λ1,λ2,ρ} ∼ GSN(−λ1, −λ2, ρ);
(5) GSN(λ1, λ2, ρ) = GSN(λ2, λ1, ρ).

3. Moment generating function

From the expression of the moment generating function of the unified multivariate skew-normal distribution in (3), it can be easily shown that the MGF of Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ) is given by

$$M_{GSN}(s;\lambda_1,\lambda_2,\rho) = c(\lambda_1,\lambda_2,\rho)\,e^{s^2/2}\,\Phi_2\!\left(\frac{\lambda_1 s}{\sqrt{1+\lambda_1^2}},\;\frac{\lambda_2 s}{\sqrt{1+\lambda_2^2}};\;\frac{\rho+\lambda_1\lambda_2}{\sqrt{1+\lambda_1^2}\sqrt{1+\lambda_2^2}}\right),\qquad s\in\mathbb{R},\tag{11}$$

where c(λ1, λ2, ρ) is as in (8), and Φ2(·, ·; ρ) is the cdf of N2(0, 0, 1, 1, ρ). A direct proof can also be given from the density function in (9). For facilitating the computation of derivatives of the MGF in (11) required for the derivation of moments, we present the following lemma.

Lemma 2. For x ∈ R and |ω| < 1, we have

$$\Phi_2(\xi_1 x,\xi_2 x;\omega)=\begin{cases}\dfrac{1}{2\pi}\cos^{-1}(-\omega), & \xi_1=0,\;\xi_2=0,\\[2mm]\tfrac{1}{2}\,\Phi_{SN}\!\left(\xi_1 x;\dfrac{-\omega}{\sqrt{1-\omega^2}}\right), & \xi_2=0,\;\xi_1\neq 0,\\[2mm]\tfrac{1}{2}\,\Phi_{SN}\!\left(\xi_2 x;\dfrac{-\omega}{\sqrt{1-\omega^2}}\right), & \xi_1=0,\;\xi_2\neq 0,\\[2mm]\tfrac{1}{2}\left\{\Phi_{SN}(\xi_1 x;\tau_1)+\Phi_{SN}(\xi_2 x;\tau_2)-I_{\{(\xi_1,\xi_2)\,|\,\xi_1\xi_2<0\}}(\xi_1,\xi_2)\right\}, & \xi_1\neq 0,\;\xi_2\neq 0,\end{cases}\tag{12}$$

where Φ_SN(·; τ) denotes the cdf of SN(τ),

$$I_A(\xi_1,\xi_2)=\begin{cases}1 & \text{if } (\xi_1,\xi_2)\in A,\\ 0 & \text{if } (\xi_1,\xi_2)\notin A,\end{cases}$$

and

$$\tau_1=\frac{1}{\sqrt{1-\omega^2}}\left(\frac{\xi_2}{\xi_1}-\omega\right)\quad\text{and}\quad \tau_2=\frac{1}{\sqrt{1-\omega^2}}\left(\frac{\xi_1}{\xi_2}-\omega\right).\tag{13}$$
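The decomposition (12) can be verified numerically (our snippet, not part of the paper; it assumes SciPy's `skewnorm` and bivariate normal cdf, and the helper names are ours), including a mixed-sign case where the indicator term is active:

```python
import numpy as np
from scipy.stats import skewnorm, multivariate_normal

def phi2_line(x, xi1, xi2, om):
    """Right-hand side of Eq. (12) for xi1 != 0, xi2 != 0, with tau's from (13)."""
    t1 = (xi2 / xi1 - om) / np.sqrt(1.0 - om**2)
    t2 = (xi1 / xi2 - om) / np.sqrt(1.0 - om**2)
    ind = 1.0 if xi1 * xi2 < 0 else 0.0
    return 0.5 * (skewnorm.cdf(xi1 * x, t1) + skewnorm.cdf(xi2 * x, t2) - ind)

def phi2_direct(x, xi1, xi2, om):
    return multivariate_normal.cdf([xi1 * x, xi2 * x], mean=[0.0, 0.0],
                                   cov=[[1.0, om], [om, 1.0]])

cases = [(1.0, 2.0, 0.3, 0.7),      # both positive
         (-0.5, -1.5, -0.4, 1.1),   # both negative
         (1.0, -1.0, 0.2, 0.3)]     # mixed signs (indicator = 1)
max_err = max(abs(phi2_line(x, a, b, om) - phi2_direct(x, a, b, om))
              for (a, b, om, x) in cases)
```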

Proof. While the expression above for the case ξ1 = ξ2 = 0 is simply the orthant probability, the expressions for the cases ξ1 = 0 and ξ2 = 0 follow easily. So, we concentrate here on the case when both ξ1 and ξ2 are non-zero. If ξ1 > 0, ξ2 > 0, consider (U1*, U2*)^T bivariate normal with zero means, variances 1/ξ1² and 1/ξ2², and correlation ω. Then,

$$\Phi_2(\xi_1 x,\xi_2 x;\omega) = P(U_1^*\le x,\,U_2^*\le x) = \tfrac{1}{2}\big\{P(U_1^*\le x\mid U_2^*<U_1^*)+P(U_2^*\le x\mid U_1^*<U_2^*)\big\}.$$

Now

$$P(U_1^*\le x\mid U_2^*<U_1^*) = P\!\left(\xi_1 U_1^*\le \xi_1 x \,\middle|\, \frac{\xi_1\xi_2(U_1^*-U_2^*)}{\sqrt{\xi_1^2+\xi_2^2-2\omega\xi_1\xi_2}}>0\right) = P(V_1^*\le \xi_1 x\mid V_2^*>0),$$

where

$$V_1^*=\xi_1 U_1^*,\qquad V_2^*=\frac{\xi_1\xi_2(U_1^*-U_2^*)}{\sqrt{\xi_1^2+\xi_2^2-2\omega\xi_1\xi_2}},\qquad (V_1^*,V_2^*)^T\sim N_2(0,0,1,1,\delta)$$

with δ = (ξ2 − ωξ1)/√(ξ1² + ξ2² − 2ωξ1ξ2). But, it is known that (see, for example, Arnold and Beaver, 2002)

$$V_1^*\mid V_2^*>0 \;\sim\; SN\!\left(\frac{\delta}{\sqrt{1-\delta^2}}\right).\tag{14}$$

Upon substituting for δ in (14) and simplifying, we obtain P(U1* ≤ x | U2* < U1*) = Φ_SN(ξ1x; τ1), where τ1 is as given in (13). Similarly, we can show that P(U2* ≤ x | U1* < U2*) = Φ_SN(ξ2x; τ2). Next, if ξ1 < 0, ξ2 < 0, we simply note that Φ2(ξ1x, ξ2x; ω) = Φ2((−ξ1)(−x), (−ξ2)(−x); ω) and, since −ξ1 > 0, −ξ2 > 0, we obtain (12). Finally, if ξ1 > 0, ξ2 < 0, we have

$$\Phi_2(\xi_1 x,\xi_2 x;\omega)=\Phi(\xi_1 x)-\Phi_2(\xi_1 x,-\xi_2 x;-\omega)=\Phi(\xi_1 x)-\tfrac{1}{2}\big\{\Phi_{SN}(\xi_1 x;-\tau_1)+\Phi_{SN}(-\xi_2 x;-\tau_2)\big\}.$$

Upon using the facts that Φ(u) = ½{Φ_SN(u; τ) + Φ_SN(u; −τ)} and Φ_SN(−u; −τ) = 1 − Φ_SN(u; τ), we obtain (12). Hence, the lemma is proved. □

From Eq. (11), for s ∈ R, we readily obtain the following:

$$M_{GSN}(s;0,0,0)=e^{s^2/2},$$

$$M_{GSN}(s;\lambda_1,0,\rho)=\tfrac{1}{2}\,c(\lambda_1,0,\rho)\,e^{s^2/2}\,\Phi_{SN}\!\left(\frac{\lambda_1 s}{\sqrt{1+\lambda_1^2}};\;\frac{-\rho}{\sqrt{1-\rho^2+\lambda_1^2}}\right),$$

$$M_{GSN}(s;0,\lambda_2,\rho)=\tfrac{1}{2}\,c(0,\lambda_2,\rho)\,e^{s^2/2}\,\Phi_{SN}\!\left(\frac{\lambda_2 s}{\sqrt{1+\lambda_2^2}};\;\frac{-\rho}{\sqrt{1-\rho^2+\lambda_2^2}}\right),$$

$$M_{GSN}(s;\lambda_1,\lambda_2,\rho)=\tfrac{1}{2}\,c(\lambda_1,\lambda_2,\rho)\,e^{s^2/2}\left\{\Phi_{SN}\!\left(\frac{\lambda_1 s}{\sqrt{1+\lambda_1^2}};\lambda_1^*\right)+\Phi_{SN}\!\left(\frac{\lambda_2 s}{\sqrt{1+\lambda_2^2}};\lambda_2^*\right)-I_{\{(\lambda_1,\lambda_2)\,|\,\lambda_1\lambda_2<0\}}(\lambda_1,\lambda_2)\right\}\quad\text{when }\lambda_1\neq 0,\;\lambda_2\neq 0,$$

where

$$\lambda_1^*=\frac{\lambda_2-\rho\lambda_1}{\lambda_1\sqrt{1-\rho^2+\lambda_1^2+\lambda_2^2-2\rho\lambda_1\lambda_2}},\qquad \lambda_2^*=\frac{\lambda_1-\rho\lambda_2}{\lambda_2\sqrt{1-\rho^2+\lambda_1^2+\lambda_2^2-2\rho\lambda_1\lambda_2}}.$$

We can readily obtain the moments of Z_{λ1,λ2,ρ} from the derivatives of the expression of the MGF given above. For example, we obtain

$$E[Z_{\lambda_1,\lambda_2,\rho}]=\frac{c(\lambda_1,\lambda_2,\rho)}{2\sqrt{2\pi}}\left\{\frac{\lambda_1}{\sqrt{1+\lambda_1^2}}+\frac{\lambda_2}{\sqrt{1+\lambda_2^2}}\right\},\tag{15}$$

$$E[Z^2_{\lambda_1,\lambda_2,\rho}]=1+\frac{c(\lambda_1,\lambda_2,\rho)}{2\pi}\;\frac{\dfrac{\lambda_1(\lambda_2-\rho\lambda_1)}{1+\lambda_1^2}+\dfrac{\lambda_2(\lambda_1-\rho\lambda_2)}{1+\lambda_2^2}}{\sqrt{1-\rho^2+\lambda_1^2+\lambda_2^2-2\rho\lambda_1\lambda_2}},$$

$$\mathrm{Var}(Z_{\lambda_1,\lambda_2,\rho})=1+\frac{c(\lambda_1,\lambda_2,\rho)}{2\pi}\;\frac{\dfrac{\lambda_1(\lambda_2-\rho\lambda_1)}{1+\lambda_1^2}+\dfrac{\lambda_2(\lambda_1-\rho\lambda_2)}{1+\lambda_2^2}}{\sqrt{1-\rho^2+\lambda_1^2+\lambda_2^2-2\rho\lambda_1\lambda_2}}-E^2[Z_{\lambda_1,\lambda_2,\rho}].\tag{16}$$
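The closed forms (15) and (16) can be cross-checked against direct numerical integration of the density (9) (our snippet, not part of the paper; it assumes SciPy, and the function names are ours):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.integrate import quad

def gsn_pdf(z, l1, l2, rho):
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    c = 2.0 * np.pi / np.arccos(-r)
    return c * norm.pdf(z) * multivariate_normal.cdf(
        [l1 * z, l2 * z], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gsn_mean(l1, l2, rho):
    """Eq. (15)."""
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    c = 2.0 * np.pi / np.arccos(-r)
    return c / (2.0 * np.sqrt(2.0 * np.pi)) * (
        l1 / np.sqrt(1.0 + l1**2) + l2 / np.sqrt(1.0 + l2**2))

def gsn_second_moment(l1, l2, rho):
    """E[Z^2] as displayed before Eq. (16)."""
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    c = 2.0 * np.pi / np.arccos(-r)
    d = np.sqrt(1.0 - rho**2 + l1**2 + l2**2 - 2.0 * rho * l1 * l2)
    s = (l1 * (l2 - rho * l1) / (1.0 + l1**2)
         + l2 * (l1 - rho * l2) / (1.0 + l2**2))
    return 1.0 + c / (2.0 * np.pi) * s / d

l1, l2, rho = 1.2, -0.4, 0.5
m_closed = gsn_mean(l1, l2, rho)
m_numeric = quad(lambda z: z * gsn_pdf(z, l1, l2, rho), -8.0, 8.0)[0]
m2_closed = gsn_second_moment(l1, l2, rho)
m2_numeric = quad(lambda z: z**2 * gsn_pdf(z, l1, l2, rho), -8.0, 8.0)[0]
```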


In the special case when λ1 = λ2 = λ, the expressions in (15) and (16) reduce to

$$E[Z_{\lambda,\lambda,\rho}]=\frac{c(\lambda,\lambda,\rho)}{\sqrt{2\pi}}\,\frac{\lambda}{\sqrt{1+\lambda^2}},\tag{17}$$

$$\mathrm{Var}(Z_{\lambda,\lambda,\rho})=1+\frac{c(\lambda,\lambda,\rho)}{\pi}\,\frac{\lambda^2(1-\rho)}{(1+\lambda^2)\sqrt{1-\rho^2+2\lambda^2(1-\rho)}}-E^2[Z_{\lambda,\lambda,\rho}].\tag{18}$$

4. Stein's lemma for the generalized skew-normal distribution

Stein's identity states that if Z0 ∼ N(0, 1) and g is a real-valued function such that E|g′(Z0)| < ∞, then

$$E[Z_0\,g(Z_0)] = E[g'(Z_0)].\tag{19}$$

Recently, Adcock (2007) discussed Stein's lemma for skew-normal distributions. In this section, we present Stein's lemma for the generalized skew-normal distribution, which is indeed a generalization of (19) and a special case of Adcock (2007).

Lemma 3. If Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ) and g is a real-valued function such that E|g′(Z_{λ1,λ2,ρ})| < ∞, then

$$E[Z_{\lambda_1,\lambda_2,\rho}\,g(Z_{\lambda_1,\lambda_2,\rho})]=E[g'(Z_{\lambda_1,\lambda_2,\rho})]+\frac{c(\lambda_1,\lambda_2,\rho)}{2\sqrt{2\pi}}\left\{\frac{\lambda_1}{\sqrt{1+\lambda_1^2}}\,E\!\left[g\!\left(\frac{Z_{\delta_1}}{\sqrt{1+\lambda_1^2}}\right)\right]+\frac{\lambda_2}{\sqrt{1+\lambda_2^2}}\,E\!\left[g\!\left(\frac{Z_{\delta_2}}{\sqrt{1+\lambda_2^2}}\right)\right]\right\},\tag{20}$$

where Z_δ ∼ SN(δ), and

$$\delta_1=\frac{\lambda_2-\rho\lambda_1}{\sqrt{(1-\rho^2)(1+\lambda_1^2)}}\quad\text{and}\quad \delta_2=\frac{\lambda_1-\rho\lambda_2}{\sqrt{(1-\rho^2)(1+\lambda_2^2)}}.$$

Proof. From Eq. (12), by integration by parts, we obtain

$$E[Zg(Z)]=E[g'(Z)]+c(\lambda_1,\lambda_2,\rho)\left\{\lambda_1\int_{-\infty}^{\infty}g(z)\phi(z)\phi(\lambda_1 z)\Phi\!\left(\frac{z(\lambda_2-\rho\lambda_1)}{\sqrt{1-\rho^2}}\right)dz+\lambda_2\int_{-\infty}^{\infty}g(z)\phi(z)\phi(\lambda_2 z)\Phi\!\left(\frac{z(\lambda_1-\rho\lambda_2)}{\sqrt{1-\rho^2}}\right)dz\right\}$$

$$=E[g'(Z)]+\frac{c(\lambda_1,\lambda_2,\rho)}{2\sqrt{2\pi}}\left\{\frac{\lambda_1}{\sqrt{1+\lambda_1^2}}\int_{-\infty}^{\infty}g\!\left(\frac{z}{\sqrt{1+\lambda_1^2}}\right)f_{SN}(z;\delta_1)\,dz+\frac{\lambda_2}{\sqrt{1+\lambda_2^2}}\int_{-\infty}^{\infty}g\!\left(\frac{z}{\sqrt{1+\lambda_2^2}}\right)f_{SN}(z;\delta_2)\,dz\right\},$$

where Z = Z_{λ1,λ2,ρ} and f_SN(z; δ) is as given in (1). Hence, the lemma is proved. □



Remark 1. In the special case when g is an even function, the above Stein's lemma reduces to

$$E[Z_{\lambda_1,\lambda_2,\rho}\,g(Z_{\lambda_1,\lambda_2,\rho})]=E[g'(Z_{\lambda_1,\lambda_2,\rho})]+\frac{c(\lambda_1,\lambda_2,\rho)}{2\sqrt{2\pi}}\left\{\frac{\lambda_1}{\sqrt{1+\lambda_1^2}}\,E\!\left[g\!\left(\frac{Z_0}{\sqrt{1+\lambda_1^2}}\right)\right]+\frac{\lambda_2}{\sqrt{1+\lambda_2^2}}\,E\!\left[g\!\left(\frac{Z_0}{\sqrt{1+\lambda_2^2}}\right)\right]\right\},$$

where Z0 ∼ N(0, 1), due to the fact that E[h(Z_δ)] = E[h(Z0)] for any even function h.

By using Stein's lemma in (20), we can easily derive expressions for higher-order moments of Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ). For example, we obtain an expression for the odd-order moments as follows:

$$E[Z^{2k+1}_{\lambda_1,\lambda_2,\rho}]=\frac{c(\lambda_1,\lambda_2,\rho)\,(2k+1)!}{2\sqrt{2\pi}\;2^{k}}\left\{\frac{\lambda_1}{(1+\lambda_1^2)^{k+1/2}}\sum_{m=0}^{k}\frac{m!\,(2\lambda_1)^{2m}}{(2m+1)!\,(k-m)!}+\frac{\lambda_2}{(1+\lambda_2^2)^{k+1/2}}\sum_{m=0}^{k}\frac{m!\,(2\lambda_2)^{2m}}{(2m+1)!\,(k-m)!}\right\}\tag{21}$$
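The odd-moment formula (21) can be checked for k = 1 against a direct numerical integration of z³ times the density (9) (our snippet, not part of the paper; it assumes SciPy, and the function names are ours):

```python
import numpy as np
from math import factorial
from scipy.stats import norm, multivariate_normal
from scipy.integrate import quad

def gsn_pdf(z, l1, l2, rho):
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    c = 2.0 * np.pi / np.arccos(-r)
    return c * norm.pdf(z) * multivariate_normal.cdf(
        [l1 * z, l2 * z], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gsn_odd_moment(k, l1, l2, rho):
    """Eq. (21): E[Z^(2k+1)] for Z ~ GSN(l1, l2, rho)."""
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    c = 2.0 * np.pi / np.arccos(-r)
    out = 0.0
    for lam in (l1, l2):
        s = sum(factorial(m) * (2.0 * lam)**(2 * m)
                / (factorial(2 * m + 1) * factorial(k - m))
                for m in range(k + 1))
        out += lam / (1.0 + lam**2)**(k + 0.5) * s
    return c * factorial(2 * k + 1) / (2.0 * np.sqrt(2.0 * np.pi) * 2**k) * out

l1, l2, rho = 0.9, 0.6, -0.3
third_closed = gsn_odd_moment(1, l1, l2, rho)
third_numeric = quad(lambda z: z**3 * gsn_pdf(z, l1, l2, rho), -8.0, 8.0)[0]
```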


for k = 0, 1, .... Of course, in the special case when λ1 = λ, λ2 = ρ = 0, i.e., GSN(λ, 0, 0) = SN(λ), the above formula readily reduces to (see Azzalini, 2005)

$$E[Z^{2k+1}_{\lambda}]=\frac{\sqrt{2}\,(2k+1)!\,\lambda}{\sqrt{\pi}\;2^{k}\,(1+\lambda^2)^{k+1/2}}\sum_{m=0}^{k}\frac{m!\,(2\lambda)^{2m}}{(2m+1)!\,(k-m)!}.$$

Similar expressions can also be derived for the even-order moments.

5. Strong unimodality, representation and convolution

In this section, we establish that GSN(λ1, λ2, ρ) has a strongly unimodal density and then present a representation result. From here on, we shall adopt the notation

$$\rho_{23.1}=\frac{\rho_{23}-\rho_{12}\rho_{13}}{\sqrt{1-\rho_{12}^2}\sqrt{1-\rho_{13}^2}},\qquad \rho_{13.2}=\frac{\rho_{13}-\rho_{12}\rho_{23}}{\sqrt{1-\rho_{12}^2}\sqrt{1-\rho_{23}^2}},\qquad \rho_{12.3}=\frac{\rho_{12}-\rho_{13}\rho_{23}}{\sqrt{1-\rho_{13}^2}\sqrt{1-\rho_{23}^2}}$$

for the partial correlation coefficients.

Theorem 1. The density function f_GSN(z; λ1, λ2, ρ) is strongly unimodal.

Proof. We know that a nondegenerate distribution F is strongly unimodal (that is, the convolution of F with any unimodal distribution function is unimodal) if and only if it has a log-concave density function f (see Karlin, 1968). To prove that log f_GSN(z; λ1, λ2, ρ) is a concave function of z, it is enough to show that the second derivative of log f_GSN(z; λ1, λ2, ρ) is negative for all z ∈ R. Note that for differentiating log f_GSN(z; λ1, λ2, ρ) with respect to z, it may be more convenient to use Eq. (12). □

Theorem 2 (Representation theorem). If (W1, W2, W3)^T ∼ N3(0, R), where

$$R=\begin{pmatrix}1 & \rho_{12} & \rho_{13}\\ \rho_{12} & 1 & \rho_{23}\\ \rho_{13} & \rho_{23} & 1\end{pmatrix}$$

is a positive definite correlation matrix, then

$$W_1\mid \min(W_2,W_3)>0 \;\sim\; GSN(\lambda_1,\lambda_2,\rho),\tag{22}$$

where

$$\lambda_1=\frac{\rho_{12}}{\sqrt{1-\rho_{12}^2}},\qquad \lambda_2=\frac{\rho_{13}}{\sqrt{1-\rho_{13}^2}},\qquad \rho=\rho_{23.1}.\tag{23}$$

Proof. Since R is a positive definite matrix and ρ23.1 is a partial correlation coefficient, it is evident that |ρ23.1| < 1. Now, let us consider (Y1, Y2)^T ∼ N2(0, 0, 1, 1, ρ23.1) to be independent of X ∼ N(0, 1). Then, it is easy to show that

$$(W_1,W_2,W_3)^T\stackrel{d}{=}(X,U_1,U_2)^T,\qquad\text{where } U_1=\rho_{12}X-\sqrt{1-\rho_{12}^2}\,Y_1\;\text{ and }\; U_2=\rho_{13}X-\sqrt{1-\rho_{13}^2}\,Y_2,$$

and so

$$W_1\mid\min(W_2,W_3)>0 \stackrel{d}{=} X\mid\min(U_1,U_2)>0 \stackrel{d}{=} X\mid U_1>0,\,U_2>0 \stackrel{d}{=} X\,\Big|\,\left(Y_1<\frac{\rho_{12}}{\sqrt{1-\rho_{12}^2}}X,\; Y_2<\frac{\rho_{13}}{\sqrt{1-\rho_{13}^2}}X\right).$$

By the use of the relationship in Eq. (4), the result in (22) follows. □
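A Monte Carlo sketch of Theorem 2 (ours, not from the paper; it assumes NumPy, a fixed seed, and uses the mean formula (15) with the parameters (23)):

```python
import numpy as np

# Simulate W1 | min(W2, W3) > 0 for a 3x3 correlation matrix R.
rng = np.random.default_rng(0)
r12, r13, r23 = 0.5, 0.3, 0.4
R = np.array([[1.0, r12, r13], [r12, 1.0, r23], [r13, r23, 1.0]])
W = rng.multivariate_normal(np.zeros(3), R, size=500_000)
cond = W[(W[:, 1] > 0) & (W[:, 2] > 0), 0]
mean_mc = cond.mean()

# Closed-form mean of GSN(l1, l2, rho) from Eq. (15), with (23)'s parameters.
l1 = r12 / np.sqrt(1.0 - r12**2)
l2 = r13 / np.sqrt(1.0 - r13**2)
rho = (r23 - r12 * r13) / (np.sqrt(1.0 - r12**2) * np.sqrt(1.0 - r13**2))
r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
c = 2.0 * np.pi / np.arccos(-r)
mean_closed = c / (2.0 * np.sqrt(2.0 * np.pi)) * (
    l1 / np.sqrt(1.0 + l1**2) + l2 / np.sqrt(1.0 + l2**2))
```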




Remark 2. Suppose Z1 ∼ SN(λ1) and Z2 ∼ SN(λ2) are independent random variables. Then, we observe that the MGF of U = (1/√2)(Z1 + Z2) is, for s ∈ R,

$$M_U(s;\lambda_1,\lambda_2)=E(e^{sU})=4e^{s^2/2}\,\Phi\!\left(\frac{\lambda_1 s}{\sqrt{2+2\lambda_1^2}}\right)\Phi\!\left(\frac{\lambda_2 s}{\sqrt{2+2\lambda_2^2}}\right)=M_{GSN}\!\left(s;\;\frac{\lambda_1}{\sqrt{2+\lambda_1^2}},\;\frac{\lambda_2}{\sqrt{2+\lambda_2^2}},\;\frac{-\lambda_1\lambda_2}{\sqrt{2+\lambda_1^2}\sqrt{2+\lambda_2^2}}\right).$$

Thence, we simply have

$$U\sim GSN\!\left(\frac{\lambda_1}{\sqrt{2+\lambda_1^2}},\;\frac{\lambda_2}{\sqrt{2+\lambda_2^2}},\;\frac{-\lambda_1\lambda_2}{\sqrt{2+\lambda_1^2}\sqrt{2+\lambda_2^2}}\right).$$

It should be mentioned here that if (Z1, Z2)^T has the bivariate skew-normal distribution presented by Azzalini and Dalla Valle (1996), then Z1 + Z2 has Azzalini's skew-normal distribution.

6. Order statistics from the trivariate normal distribution

In this section, we study distributions of order statistics from the trivariate normal distribution in terms of GSN distributions. For this purpose, let us assume that (W1, W2, W3)^T ∼ N3(0, R), where

$$R=\begin{pmatrix}\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 & \rho_{13}\sigma_1\sigma_3\\ \rho_{12}\sigma_1\sigma_2 & \sigma_2^2 & \rho_{23}\sigma_2\sigma_3\\ \rho_{13}\sigma_1\sigma_3 & \rho_{23}\sigma_2\sigma_3 & \sigma_3^2\end{pmatrix}$$

is a positive definite matrix. Let W1:3 = min(W1, W2, W3) < W2:3 < W3:3 = max(W1, W2, W3) denote the order statistics from (W1, W2, W3)^T, and let F(i)(t; R) denote the cdf of Wi:3 for i = 1, 2, 3.

Theorem 3. The cdf of W3:3 is given by

$$F_{(3)}(t;R)=a(\mathbf{h}_1)\,F_{GSN}\!\left(\frac{t}{\sigma_1};\mathbf{h}_1\right)+a(\mathbf{h}_2)\,F_{GSN}\!\left(\frac{t}{\sigma_2};\mathbf{h}_2\right)+a(\mathbf{h}_3)\,F_{GSN}\!\left(\frac{t}{\sigma_3};\mathbf{h}_3\right),\qquad t\in\mathbb{R},\tag{24}$$

where F_GSN(·; h) denotes the cdf of GSN(h), a(h) is as given in (7), and

$$\mathbf{h}_1=\left(\frac{\sigma_1/\sigma_2-\rho_{12}}{\sqrt{1-\rho_{12}^2}},\;\frac{\sigma_1/\sigma_3-\rho_{13}}{\sqrt{1-\rho_{13}^2}},\;\rho_{23.1}\right)^T,$$

$$\mathbf{h}_2=\left(\frac{\sigma_2/\sigma_1-\rho_{12}}{\sqrt{1-\rho_{12}^2}},\;\frac{\sigma_2/\sigma_3-\rho_{23}}{\sqrt{1-\rho_{23}^2}},\;\rho_{13.2}\right)^T,$$

$$\mathbf{h}_3=\left(\frac{\sigma_3/\sigma_1-\rho_{13}}{\sqrt{1-\rho_{13}^2}},\;\frac{\sigma_3/\sigma_2-\rho_{23}}{\sqrt{1-\rho_{23}^2}},\;\rho_{12.3}\right)^T.\tag{25}$$
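The mixture representation (24)-(25) admits a fully deterministic check, since P(W3:3 ≤ t) is also the trivariate normal cdf evaluated at (t, t, t) (our snippet, not from the paper; it assumes SciPy, and all helper names are ours):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.integrate import quad

def a_weight(l1, l2, rho):
    r = (rho + l1 * l2) / np.sqrt((1.0 + l1**2) * (1.0 + l2**2))
    return np.arccos(-r) / (2.0 * np.pi)

def gsn_cdf(t, l1, l2, rho):
    c = 1.0 / a_weight(l1, l2, rho)
    pdf = lambda z: c * norm.pdf(z) * multivariate_normal.cdf(
        [l1 * z, l2 * z], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return quad(pdf, -8.0, t)[0]

def pcor(ij, ik, jk):
    """Partial correlation (jk - ij*ik)/sqrt((1-ij^2)(1-ik^2))."""
    return (jk - ij * ik) / (np.sqrt(1.0 - ij**2) * np.sqrt(1.0 - ik**2))

s = np.array([1.0, 1.3, 0.8])
r12, r13, r23 = 0.3, -0.2, 0.4

# Mixture parameters h1, h2, h3 of Eq. (25).
h1 = ((s[0] / s[1] - r12) / np.sqrt(1 - r12**2),
      (s[0] / s[2] - r13) / np.sqrt(1 - r13**2), pcor(r12, r13, r23))
h2 = ((s[1] / s[0] - r12) / np.sqrt(1 - r12**2),
      (s[1] / s[2] - r23) / np.sqrt(1 - r23**2), pcor(r12, r23, r13))
h3 = ((s[2] / s[0] - r13) / np.sqrt(1 - r13**2),
      (s[2] / s[1] - r23) / np.sqrt(1 - r23**2), pcor(r13, r23, r12))

t = 0.9
F3_mix = sum(a_weight(*h) * gsn_cdf(t / sig, *h)
             for h, sig in zip((h1, h2, h3), s))

# Direct: P(max(W1, W2, W3) <= t) = Phi_3(t, t, t; R).
R = np.array([[s[0]**2, r12 * s[0] * s[1], r13 * s[0] * s[2]],
              [r12 * s[0] * s[1], s[1]**2, r23 * s[1] * s[2]],
              [r13 * s[0] * s[2], r23 * s[1] * s[2], s[2]**2]])
F3_direct = multivariate_normal.cdf([t, t, t], mean=np.zeros(3), cov=R)
```

Note also that the weights a(h1) + a(h2) + a(h3) sum to one, being the probabilities of the three events "Wi is the maximum".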

Proof. First of all, we can write

$$F_{(3)}(t;R)=P[W_{3:3}\le t]=P[W_1\le t,\,W_2\le W_1,\,W_3\le W_1]+P[W_2\le t,\,W_1\le W_2,\,W_3\le W_2]+P[W_3\le t,\,W_1\le W_3,\,W_2\le W_3].\tag{26}$$

Let us consider the first term on the RHS of (26) and write it as

$$P[W_1\le t,\,W_2\le W_1,\,W_3\le W_1]=P[W_2\le W_1,\,W_3\le W_1]\;P[W_1\le t\mid W_2\le W_1,\,W_3\le W_1].\tag{27}$$

Now,

$$P[W_2\le W_1,\,W_3\le W_1]=P[\sigma_2 U_1\le \sigma_1 X,\;\sigma_3 U_2\le \sigma_1 X],$$

where X, (Y1, Y2)^T and (U1, U2)^T are as defined in the proof of Theorem 2. Therefore,

$$P[W_2\le W_1,\,W_3\le W_1]=P\!\left[-Y_1\le\frac{\sigma_1/\sigma_2-\rho_{12}}{\sqrt{1-\rho_{12}^2}}X,\;-Y_2\le\frac{\sigma_1/\sigma_3-\rho_{13}}{\sqrt{1-\rho_{13}^2}}X\right]=a(\mathbf{h}_1)\tag{28}$$

by Eq. (6), since (−Y1, −Y2)^T ∼ N2(0, 0, 1, 1, ρ23.1). Next,

$$P[W_1\le t\mid W_2\le W_1,\,W_3\le W_1]=P\!\left[X\le\frac{t}{\sigma_1}\,\middle|\,-Y_1\le\frac{\sigma_1/\sigma_2-\rho_{12}}{\sqrt{1-\rho_{12}^2}}X,\;-Y_2\le\frac{\sigma_1/\sigma_3-\rho_{13}}{\sqrt{1-\rho_{13}^2}}X\right]=F_{GSN}\!\left(\frac{t}{\sigma_1};\mathbf{h}_1\right)\tag{29}$$

by Eq. (4). Substituting the expressions in (28) and (29) into Eq. (27), we obtain

$$P[W_1\le t,\,W_2\le W_1,\,W_3\le W_1]=a(\mathbf{h}_1)\,F_{GSN}\!\left(\frac{t}{\sigma_1};\mathbf{h}_1\right).$$

In a similar manner, we obtain

$$P[W_2\le t,\,W_1\le W_2,\,W_3\le W_2]=a(\mathbf{h}_2)\,F_{GSN}\!\left(\frac{t}{\sigma_2};\mathbf{h}_2\right),\qquad P[W_3\le t,\,W_1\le W_3,\,W_2\le W_3]=a(\mathbf{h}_3)\,F_{GSN}\!\left(\frac{t}{\sigma_3};\mathbf{h}_3\right),$$

which completes the proof of the theorem. □

Theorem 4. The cdf of W2:3 is given by

$$F_{(2)}(t;R)=a(\mathbf{d}_1)\,F_{GSN}\!\left(\frac{t}{\sigma_1};\mathbf{d}_1\right)+a(\mathbf{d}_2)\,F_{GSN}\!\left(\frac{t}{\sigma_1};\mathbf{d}_2\right)+a(\mathbf{d}_3)\,F_{GSN}\!\left(\frac{t}{\sigma_2};\mathbf{d}_3\right)+a(\mathbf{d}_4)\,F_{GSN}\!\left(\frac{t}{\sigma_2};\mathbf{d}_4\right)+a(\mathbf{d}_5)\,F_{GSN}\!\left(\frac{t}{\sigma_3};\mathbf{d}_5\right)+a(\mathbf{d}_6)\,F_{GSN}\!\left(\frac{t}{\sigma_3};\mathbf{d}_6\right),\qquad t\in\mathbb{R},\tag{30}$$

where F_GSN(·; d) denotes the cdf of GSN(d), a(d) is as given in (7), and

$$\mathbf{d}_1=\left(\frac{\sigma_1/\sigma_2-\rho_{12}}{\sqrt{1-\rho_{12}^2}},\;\frac{\rho_{13}-\sigma_1/\sigma_3}{\sqrt{1-\rho_{13}^2}},\;-\rho_{23.1}\right)^T,$$

$$\mathbf{d}_2=\left(\frac{\sigma_1/\sigma_3-\rho_{13}}{\sqrt{1-\rho_{13}^2}},\;\frac{\rho_{12}-\sigma_1/\sigma_2}{\sqrt{1-\rho_{12}^2}},\;-\rho_{23.1}\right)^T,$$

$$\mathbf{d}_3=\left(\frac{\sigma_2/\sigma_1-\rho_{12}}{\sqrt{1-\rho_{12}^2}},\;\frac{\rho_{23}-\sigma_2/\sigma_3}{\sqrt{1-\rho_{23}^2}},\;-\rho_{13.2}\right)^T,$$

$$\mathbf{d}_4=\left(\frac{\sigma_2/\sigma_3-\rho_{23}}{\sqrt{1-\rho_{23}^2}},\;\frac{\rho_{12}-\sigma_2/\sigma_1}{\sqrt{1-\rho_{12}^2}},\;-\rho_{13.2}\right)^T,$$

$$\mathbf{d}_5=\left(\frac{\sigma_3/\sigma_1-\rho_{13}}{\sqrt{1-\rho_{13}^2}},\;\frac{\rho_{23}-\sigma_3/\sigma_2}{\sqrt{1-\rho_{23}^2}},\;-\rho_{12.3}\right)^T,$$

$$\mathbf{d}_6=\left(\frac{\sigma_3/\sigma_2-\rho_{23}}{\sqrt{1-\rho_{23}^2}},\;\frac{\rho_{13}-\sigma_3/\sigma_1}{\sqrt{1-\rho_{13}^2}},\;-\rho_{12.3}\right)^T.\tag{31}$$


Proof. First of all, we can write F(2) (t; R) = P[W2:3 ⱕ t] = P[W1 ⱕ t, W2 ⱕ W1 ⱕ W3 ] + P[W1 ⱕ t, W3 ⱕ W1 ⱕ W2 ] + P[W2 ⱕ t, W1 ⱕ W2 ⱕ W3 ] + P[W2 ⱕ t, W3 ⱕ W2 ⱕ W1 ] + P[W3 ⱕ t, W1 ⱕ W3 ⱕ W2 ] + P[W3 ⱕ t, W2 ⱕ W3 ⱕ W1 ]. After writing, for example, P[W1 ⱕ t, W2 ⱕ W1 ⱕ W3 ] = P[W2 ⱕ W1 ⱕ W3 ]P[W1 ⱕ t|W2 ⱕ W1 ⱕ W3 ] and then proceeding as done in Theorem 3, it can be shown that P[W1 ⱕ t, W2 ⱕ W1 ⱕ W3 ] = a(d1 )GSN



t

1

 ; d1

which is the first term on the RHS of Eq. (30). Similarly, all other terms can be derived, which completes the proof of the theorem.  Remark 3. The distribution of the smallest order statistic, W1:3 , can be readily obtained from the distribution of W3:3 presented d

in Theorem 3 by using the fact that W1:3 = −W3:3 . Remark 4. Upon differentiating the expressions of the distribution functions presented in Eqs. (24) and (30), we can readily obtain the density functions of W3:3 and W2:3 , respectively. For example, we obtain from Eq. (24) the pdf of W3:3 as f(3) (t; R) =

1

1

a(h1 )GSN



t

1

     1 t 1 t ; h1 + a(h2 )GSN ; h2 + a(h3 )GSN ; h3 ,

2

2

3

3

t ∈ R.

Remark 5. In the special case when Wi are standardized and equicorrelated, i.e., when 1 = 2 = 3 = 1 and 12 = 13 = 23 = ∗ , where − 12 < ∗ < 1, it follows that # W1:3 ∼ GSN − # W2:3 ∼ GSN # W3:3 ∼ GSN

# 1 − ∗ 1 − ∗ ∗ , − , , 1 + ∗ 1 + ∗ 1 + ∗

# 1 − ∗ 1 − ∗ −∗ ,− , , 1 + ∗ 1 + ∗ 1 + ∗ 1 − ∗ , 1 + ∗

#

1 − ∗ ∗ , ∗ 1 +  1 + ∗

.

In particular, when ∗ = 0, or equivalently, when W1 , W2 , W3 are i.i.d. N(0, 1), then W1:3 ∼ GSN(−1, −1, 0),

W2:3 ∼ GSN(1, −1, 0),

W3:3 ∼ GSN(1, 1, 0),

a well-known distributional result on order statistics from N(0, 1). The above distributional results, when combined with Theorem 1, readily imply that the order statistics from a trivariate exchangeable normal distribution are strongly unimodal, which incidentally generealizes the corresponding result for the i.i.d. case due to Huang and Ghosh (1982). Remark 6. In general, if k ∈ Rk and K ∈ Rk×k is a positive definite correlation matrix, then we can define Zk,K ∼ GSN(k, K) as d

Zk,K = X|(Y < kX) where X ∼ N(0, 1) independently of Y ∼ Nk (0, K). We can then show that the distributions of order statistics Wi:k+1 (1 ⱕ i ⱕ k + 1) from the multivariate normal vector W ∼ Nk+1 (0, R), with R being a positive definite covariance matrix, are indeed mixtures of GSN(k, K) for suitable choices of k and K.

A. Jamalizadeh, N. Balakrishnan / Journal of Statistical Planning and Inference 139 (2009) 3799 -- 3819

3809

7. MGFs of order statistics from the trivariate normal distribution Using the MGF of GSN(h) presented in Section 3, we can derive the MGFs of order statistics W1:3 , W2:3 , W3:3 as given in the following theorem. For simplicity, we present the results in the standard case, i.e., when 1 = 2 = 3 = 1. Theorem 5. The MGFs of W1:3 , W2:3 , W3:3 are, for s ∈ R,  M(1) (s; R) = es

2 /2

$

SN − 

$



$

$ $  1 − 12 1 − 13 1 − 23 s; 1 + SN − s; 2 + SN − s; 3 , 2 2 2

$ $ 1 − 12 1 − 12 1 − 13 SN s; −1 + SN − s; −1 + SN s; −2 M(2) (s; R) = e 2 2 2 $ $ $  1 − 13 1 − 23 1 − 23 s; −2 + SN s; −3 + SN − s; −3 − 3 , +SN − 2 2 2 s2 /2

s2 /2

M(3) (s; R) = e

SN

$ $  1 − 12 1 − 13 1 − 23 s; 1 + SN s; 2 + SN s; 3 , 2 2 2

respectively, where 1 + 12 − 13 − 23 1 + 13 − 12 − 23 1 + 23 − 12 − 13 , 2 = , 3 = , √ √ √ A A A A = 6 − {(1 + 12 )2 + (1 + 13 )2 + (1 + 23 )2 } + 2(12 13 + 12 23 + 13 23 ).

1 =

Corollary 1. From the MGFs presented in Theorem 5, we can readily obtain the means and variances of order statistics as follows:   1  E(W3:3 ) = −E(W1:3 ) = √ { 1 − 12 + 1 − 13 + 1 − 23 } 2 

and E(W2:3 ) = 0

and √ Var(W3:3 ) = Var(W1:3 ) = 1 +

A

2

√ − E2 (W3:3 )

and

Var(W2:3 ) = 1 −

A



,

where A is as defined above in Theorem 5. Corollary 2. In the special case when Wi are equicorrelated, i.e., when 12 = 13 = 23 = ∗ , where − 12 < ∗ < 1, the MGFs of W1:3 , W2:3 , W3:3 in Theorem 5 reduce to, for s ∈ R, 1 − ∗ 1 , s; √ M(1) (s;  ) = 3e SN − 2 3   $ $ 1 − ∗ 1 − ∗ 1 1 ∗ s2 /2 SN M(2) (s;  ) = 3e + SN − −1 , s; − √ s; − √ 2 2 3 3 $ 1 − ∗ 1 2 M(3) (s; ∗ ) = 3es /2 SN , s; √ 2 3 $



s2 /2

respectively. Furthermore, the expressions for the means and variances presented in Corollary 1 reduce in this case to 3

E(W3:3 ) = −E(W1:3 ) = √ 1 − ∗ , 2 

E(W2:3 ) = 0

and Var(W3:3 ) = Var(W1:3 ) = 1 −

√ 9−2 3 (1 − ∗ ), 4

√ Var(W2:3 ) = 1 −

3



(1 − ∗ ).

3810

A. Jamalizadeh, N. Balakrishnan / Journal of Statistical Planning and Inference 139 (2009) 3799 -- 3819

By using the mixture representations in Theorems 3 and 4 along with Stein's formula in (21), we can derive the higher-order moments of these order statistics as well. For example, we obtain, for k = 0, 1, . . . , 2k+1 ] = 0, E[W2:3 2k+1 ]= E[W3:3

⎧   k   m!22m (2k + 1)! ⎨ 1 − 12 m k √ 2k+1 (1 + 12 ) 1 − 12 ⎩ (2m + 1)!(k − m)! 1 + 12 2 m=0 k   +(1 + 13 )k 1 − 13 m=0

+(1 + 23 )

k



1 − 23

k  m=0

  1 − 13 m m!22m (2m + 1)!(k − m)! 1 + 13

⎫   1 − 23 m ⎬ m!22m , ⎭ (2m + 1)!(k − m)! 1 + 23

2k+1 2k+1 ] = −E[W3:3 ]. E[W1:3

Of course, in the special case when 12 = 13 = 23 = ∗ , these expressions readily simplify to 2k+1 E[W2:3 ] = 0, 2k+1 2k+1 ] = −E[W1:3 ]= E[W3:3

  k 3(2k + 1)!(1 + ∗ )k 1 − ∗  1 − ∗ m m!22m √ 2k+1 ∗ (2m + 1)!(k − m)! 1 +  2 m=0

which, in turn, reduce to the following expressions in the special case when ∗ = 0: 2k+1 E[W2:3 ] = 0, k m!22m 3(2k + 1)!  2k+1 2k+1 . ] = −E[W1:3 ]= √ E[W3:3 22k+1 m=0 (2m + 1)!(k − m)!

It is important to mention here that order statistics from equicorrelated normal variables have been discussed earlier by Gupta (1963), Gupta et al. (1973), Young (1967) and David and Joshi (1968). The results on distributions and moments obtained here agree with those obtained by these authors for the trivariate case. Remark 7. The MGFs of order statistics from a trivariate normal distribution presented in Theorem 5 can also be used to derive expressions for moments of order statistics from a trivariate lognormal distribution. Specifically, if we take (ln V1 , ln V2 , ln V3 )T = (W1 , W2 , W3 )T ∼ N3 (0, R) so that (V1 , V2 , V3 )T has a trivariate log-normal distribution, then we readily have s E[Vi:3 ] = E[es ln Vi:3 ] = E[esW i:3 ] = M(i) (s; R)

for i = 1, 2, 3,

where M(i) (s; R) is as given in Theorem 5. Lien (1986) derived direct expressions for moments of order statistics from a bivariate lognormal distribution, and the present approach through the generalized skew-normal distribution therefore presents another method for this derivation. 8. Definition of generalized skew-tm distribution An n-dimensional random vector X is said to have an elliptically contoured distribution with location vector n ∈ Rn , nonnegative definite dispersion matrix X ∈ Rn×n , and characteristic generator , if the centered random vector X−n has characteristic function of the form  (s) = (sT Xs) for s ∈ Rn ; see Cambanis et al. (1981) for the most general definition of this family. In this X−n

In this case, we write X ∼ EC_n(μ, Σ, ψ), and its cdf will be denoted by F_{EC_n}(x; μ, Σ, ψ). It is well known that this family of distributions is closed under linear transformations, marginalization, and conditioning. Specifically, if X ∼ EC_n(μ, Σ, ψ) with Σ being of full rank n, then Σ^{−1/2}(X − μ) ∼ EC_n(0, I_n, ψ), where I_n is the n × n identity matrix. Moreover, if the pdf of X exists, it is of the form

f_{EC_n}(x; μ, Σ, f̃^{(n)}) = |Σ|^{−1/2} f̃^{(n)}((x − μ)^T Σ^{−1}(x − μ)),  x ∈ R^n,  (32)

where f̃^{(n)} is the density generator function; see also Fang et al. (1990). In this case, ψ can be replaced by f̃^{(n)} in the above notation, and we write X ∼ EC_n(μ, Σ, f̃^{(n)}). In addition, if X, μ and Σ are partitioned as

X = (X1^T, X2^T)^T,  μ = (μ1^T, μ2^T)^T,  Σ = ((Σ11, Σ12), (Σ21, Σ22)),

where X1 and μ1 are n1 × 1 (n1 ≤ n) vectors and Σ11 is an n1 × n1 positive definite matrix, then

X1 ∼ EC_{n1}(μ1, Σ11, f̃^{(n1)}),
X2 | (X1 = x1) ∼ EC_{n−n1}(μ2 + Σ21 Σ11^{−1}(x1 − μ1), Σ22 − Σ21 Σ11^{−1} Σ12, f̃_{w(x1)}^{(n−n1)}),

where w(x1) = (x1 − μ1)^T Σ11^{−1}(x1 − μ1), and f̃^{(n1)} and f̃_a^{(n−n1)} can be expressed in terms of f̃^{(n)} as

f̃^{(n1)}(u) = [π^{(n−n1)/2}/Γ((n − n1)/2)] ∫_0^{+∞} x^{(n−n1)/2−1} f̃^{(n)}(u + x) dx,  u ≥ 0,

and

f̃_a^{(n−n1)}(u) = f̃^{(n)}(u + a)/f̃^{(n1)}(a),  u, a ≥ 0.

The two most important elliptical distributions are the multivariate normal and t distributions. Specifically, if the generator function in (32) is f̃^{(n)}(u) = e^{−u/2}/(2π)^{n/2}, u ≥ 0, we get the usual multivariate normal distribution, denoted by X ∼ N_n(μ, Σ), with density function

φ_n(x; μ, Σ) = (2π)^{−n/2} |Σ|^{−1/2} exp{−(1/2)(x − μ)^T Σ^{−1}(x − μ)},  x ∈ R^n.

Similarly, if for ν > 0 (degrees of freedom) the generator function is

f̃^{(n)}(u) = [Γ((ν + n)/2)/(Γ(ν/2)(νπ)^{n/2})] (1 + u/ν)^{−(ν+n)/2},  u ≥ 0,

we get the usual multivariate t distribution, denoted by X ∼ t_n(μ, Σ, ν), with density function

g_n(x; μ, Σ, ν) = [Γ((ν + n)/2)/(Γ(ν/2)(νπ)^{n/2}|Σ|^{1/2})] {1 + (x − μ)^T Σ^{−1}(x − μ)/ν}^{−(ν+n)/2},  x ∈ R^n.  (33)

Lemma 4. If X = (X1, ..., Xn)^T ∼ t_n(0, R, ν), then

(X1, ..., Xn)^T =d V^{−1/2}(W1, ..., Wn)^T, or X =d V^{−1/2} W,

where W ∼ N_n(0, R) independently of V ∼ χ²_ν/ν (χ²_ν denoting the chi-square distribution with ν degrees of freedom); see Fang et al. (1990).

Arellano-Valle and Azzalini (2006) recently presented the unified multivariate skew-elliptical distributions. Specifically, let U and V be two random vectors of dimensions m and n, respectively, and further let

(U^T, V^T)^T ∼ EC_{m+n}((τ^T, μ^T)^T, ((Γ, Δ^T), (Δ, Σ)), f̃^{(m+n)}).  (34)

Then, the n-variate random vector X is said to have the unified multivariate skew-elliptical distribution with parameter θ = (μ, τ, Σ, Γ, Δ, f̃^{(m+n)}), where τ ∈ R^m, μ ∈ R^n, Γ ∈ R^{m×m}, Σ ∈ R^{n×n} (where Γ and Σ are positive definite dispersion matrices), Δ ∈ R^{n×m}, and f̃^{(m+n)} is a density generator function, denoted by X ∼ SUEC_{n,m}(μ, τ, Σ, Γ, Δ, f̃^{(m+n)}), if

X =d V | (U > 0),

where U > 0 means that all m components of U are positive. The density function of X is (see Arellano-Valle and Azzalini, 2006)

f_{SUEC_{n,m}}(x; θ) = f_{EC_n}(x; μ, Σ, f̃^{(n)}) F_{EC_m}(τ + Δ^T Σ^{−1}(x − μ); Γ − Δ^T Σ^{−1}Δ, f̃_{w(x)}^{(m)}) / F_{EC_m}(τ; Γ, f̃^{(m)}),  x ∈ R^n,  (35)

where w(x) = (x − μ)^T Σ^{−1}(x − μ), f_{EC_n}(·; μ, Σ, f̃^{(n)}) is the pdf of EC_n(μ, Σ, f̃^{(n)}), and F_{EC_m}(·; Γ − Δ^T Σ^{−1}Δ, f̃_{w(x)}^{(m)}) and F_{EC_m}(·; Γ, f̃^{(m)}) are the cdf's of EC_m(0, Γ − Δ^T Σ^{−1}Δ, f̃_{w(x)}^{(m)}) and EC_m(0, Γ, f̃^{(m)}), respectively. It is important to mention here that the density function in (35) reduces to the density function in (32) if Δ = 0 and τ = 0.

Here, we consider a special case of the unified skew-elliptical distribution in (35). If T_{ν,λ1,λ2,ρ} is a random variable such that

T_{ν,λ1,λ2,ρ} =d V | (U1 < λ1 V, U2 < λ2 V),  λ1, λ2 ∈ R, |ρ| < 1,  (36)

where (U1, U2, V)^T ∼ t3(0, Σ*, ν) with

Σ* = ((1, ρ, 0), (ρ, 1, 0), (0, 0, 1)),  |ρ| < 1,


we say that it has a generalized skew-t distribution with parameter (ν, λ1, λ2, ρ)^T, and denote it by GSt(ν, λ1, λ2, ρ). Its pdf can be shown, from the general form in (35) and properties of the multivariate t distribution (see Fang et al., 1990), to be

g_GSt(w; ν, λ1, λ2, ρ) = c(λ1, λ2, ρ) g(w; ν) G2(λ1 w √((ν+1)/(ν+w²)), λ2 w √((ν+1)/(ν+w²)); ρ, ν + 1),  w ∈ R,

where g(·; ν) and G2(·, ·; ρ, ν + 1) are, respectively, the pdf of Student's t distribution with ν degrees of freedom and the cdf of t2(0, ((1, ρ), (ρ, 1)), ν + 1), and c(λ1, λ2, ρ) is as before. The following lemma shows that GSt(ν, λ1, λ2, ρ) is a scale mixture of the GSN(λ1, λ2, ρ) distribution discussed in Section 2.

Lemma 5. If T_{ν,λ1,λ2,ρ} ∼ GSt(ν, λ1, λ2, ρ), then

T_{ν,λ1,λ2,ρ} =d W^{−1/2} Z_{λ1,λ2,ρ},  (37)

where W ∼ χ²_ν/ν independently of Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ).

Proof. By Eq. (36), we have

T_{ν,λ1,λ2,ρ} =d V | (U1 < λ1 V, U2 < λ2 V),

where (U1, U2, V)^T ∼ t3(0, Σ*, ν) with Σ* as given earlier. Now, by Lemma 4, we have

(U1, U2, V)^T =d (W^{−1/2} Y1, W^{−1/2} Y2, W^{−1/2} X)^T = W^{−1/2}(Y1, Y2, X)^T,

where X ∼ N(0, 1) independently of (Y1, Y2)^T ∼ N2(0, 0, 1, 1, ρ), with W ∼ χ²_ν/ν independent of both. Therefore,

T_{ν,λ1,λ2,ρ} =d W^{−1/2} X | (W^{−1/2} Y1 < λ1(W^{−1/2} X), W^{−1/2} Y2 < λ2(W^{−1/2} X)) =d W^{−1/2} X | (Y1 < λ1 X, Y2 < λ2 X).

Since X | (Y1 < λ1 X, Y2 < λ2 X) ∼ GSN(λ1, λ2, ρ) by Eq. (4), we readily obtain the distributional equality in (37). □

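The two characterizations of GSt(ν, λ1, λ2, ρ) — the conditioning definition in (36) and the scale-mixture representation in (37) — can be checked against each other by simulation. The following Python sketch is our own illustration (the parameter values ν = 5, λ1 = λ2 = 1, ρ = 0.3 used below are arbitrary); both samplers use simple rejection.

```python
import numpy as np

def sample_gst_conditioning(nu, l1, l2, rho, n, rng):
    """Sample GSt(nu, l1, l2, rho) by rejection, per definition (36):
    V | (U1 < l1*V, U2 < l2*V) with (U1, U2, V)^T ~ t3(0, Sigma*, nu)."""
    out = []
    while len(out) < n:
        m = 4 * (n - len(out)) + 1000
        # trivariate t via the normal scale mixture of Lemma 4
        y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=m)
        x = rng.standard_normal(m)
        w = np.sqrt(rng.chisquare(nu, size=m) / nu)
        u1, u2, v = y[:, 0] / w, y[:, 1] / w, x / w
        out.extend(v[(u1 < l1 * v) & (u2 < l2 * v)])
    return np.array(out[:n])

def sample_gst_scale_mixture(nu, l1, l2, rho, n, rng):
    """Sample GSt via Lemma 5: W^(-1/2) * Z with Z ~ GSN(l1, l2, rho),
    Z itself drawn by rejection from Eq. (4)."""
    out = []
    while len(out) < n:
        m = 4 * (n - len(out)) + 1000
        x = rng.standard_normal(m)
        y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=m)
        out.extend(x[(y[:, 0] < l1 * x) & (y[:, 1] < l2 * x)])
    z = np.array(out[:n])
    w = rng.chisquare(nu, size=n) / nu
    return z / np.sqrt(w)
```

With large samples, the empirical moments from the two routes agree to Monte Carlo accuracy, as Lemma 5 asserts.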

9. Distribution and density functions of generalized skew-tν

Although we presented an expression for the density function of the generalized skew-t distribution in the last section, we present in this section integral forms for the cdf and pdf of the generalized skew-t distribution GSt(ν, λ1, λ2, ρ). Let us denote by G_GSt(·; ν, λ1, λ2, ρ) and g_GSt(·; ν, λ1, λ2, ρ) the cdf and pdf of GSt(ν, λ1, λ2, ρ), respectively. In general, we have by (37), for w ∈ R,

G_GSt(w; ν, λ1, λ2, ρ) = Pr{T_{ν,λ1,λ2,ρ} ≤ w} = Pr{V^{−1/2} Z_{λ1,λ2,ρ} ≤ w} = Pr{Z_{λ1,λ2,ρ} ≤ w V^{1/2}} = E[Φ_GSN(w V^{1/2}; λ1, λ2, ρ)],  (38)

where V ∼ χ²_ν/ν and Φ_GSN(·; λ1, λ2, ρ) is the cdf of GSN(λ1, λ2, ρ). From (38), upon using the pdf of V, we can express the cdf of GSt(ν, λ1, λ2, ρ) in the form of an integral as

G_GSt(w; ν, λ1, λ2, ρ) = [2(ν/2)^{ν/2}/Γ(ν/2)] ∫_0^∞ x^{ν−1} e^{−νx²/2} Φ_GSN(wx; λ1, λ2, ρ) dx,  w ∈ R.  (39)

Now, upon differentiating the expression of G_GSt(w; ν, λ1, λ2, ρ) in (39) with respect to w, we readily obtain an expression for the pdf as

g_GSt(w; ν, λ1, λ2, ρ) = ∂G_GSt(w; ν, λ1, λ2, ρ)/∂w = [2(ν/2)^{ν/2}/Γ(ν/2)] ∫_0^∞ x^ν e^{−νx²/2} φ_GSN(wx; λ1, λ2, ρ) dx,  w ∈ R,  (40)

where φ_GSN(·; λ1, λ2, ρ) is the pdf of GSN(λ1, λ2, ρ).
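The one-dimensional mixture integral (39) is straightforward to evaluate numerically. In the Python sketch below (function names are ours, not from the paper), the GSN cdf is obtained by quadrature, with the normalizing constant c(λ1, λ2, ρ) computed numerically rather than taken from Eq. (8); for λ1 = λ2 = 0 the skewing factor is constant, so the result must reduce to the ordinary Student-t cdf.

```python
import numpy as np
from scipy import integrate, stats
from scipy.special import gammaln

def make_gsn_kernel(l1, l2, rho):
    """Unnormalized GSN density: phi(z) * Phi2(l1*z, l2*z; rho)."""
    mvn = stats.multivariate_normal(mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]])
    return lambda z: stats.norm.pdf(z) * mvn.cdf([l1 * z, l2 * z])

def gst_cdf(w, nu, l1, l2, rho):
    """cdf of GSt(nu, l1, l2, rho) via the mixture integral (39)."""
    kernel = make_gsn_kernel(l1, l2, rho)
    c = 1.0 / integrate.quad(kernel, -np.inf, np.inf)[0]  # normalizing constant
    gsn_cdf = lambda t: c * integrate.quad(kernel, -np.inf, t)[0]
    coef = 2.0 * (nu / 2.0) ** (nu / 2.0) / np.exp(gammaln(nu / 2.0))
    val, _ = integrate.quad(
        lambda x: x ** (nu - 1.0) * np.exp(-nu * x * x / 2.0) * gsn_cdf(w * x),
        0.0, np.inf)
    return coef * val
```

As a further check, GSt(ν, λ, 0, 0) coincides with the Azzalini–Capitanio skew-t St(ν, λ), whose cdf can be computed independently by integrating the density 2 g(x; ν) G(λx√((ν+1)/(ν+x²)); ν+1).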


Remark 8. In the special case when λ1 = λ, λ2 = 0, ρ = 0 (or λ1 = 0, λ2 = λ, ρ = 0), we simply obtain the univariate skew-t distribution of Azzalini and Capitanio (2003) with one parameter λ, denoted by St(ν, λ). In this case, denoting T_{ν,λ1,λ2,ρ} by T_{ν,λ}, we have from Lemma 5

T_{ν,λ} =d V^{−1/2} Z_λ,  λ ∈ R,

where V ∼ χ²_ν/ν independently of Z_λ ∼ SN(λ) (Azzalini's skew-normal distribution). Further, in this case, with G_St(·; ν, λ) denoting the cdf of T_{ν,λ}, we readily have from Eq. (38)

G_St(t; ν, λ) = E[Φ_SN(t V^{1/2}; λ)],  t, λ ∈ R,  (41)

where Φ_SN(·; λ) denotes the cdf of SN(λ). Let g_St(t; ν, λ) be the density function corresponding to G_St(t; ν, λ) in (41).

Special Case 1. In the special case when ν = 1, we obtain a skew-Cauchy distribution with three parameters, which is a generalization of the skew-Cauchy distribution presented by Behboodian et al. (2006). Though the cdf cannot be obtained explicitly in this case, an explicit expression for the pdf can be obtained. We have from Eq. (40) that, for w ∈ R,

g_GSt(w; 1, λ1, λ2, ρ) = 2 ∫_0^∞ x φ(x) φ_GSN(wx; λ1, λ2, ρ) dx
= 2c(λ1, λ2, ρ) ∫_0^∞ x φ(x) φ(wx) Φ2(λ1 wx, λ2 wx; ρ) dx
= [c(λ1, λ2, ρ)/(π(1 + w²))] ∫_0^∞ (1 + w²) x e^{−(1+w²)x²/2} Φ2(λ1 wx, λ2 wx; ρ) dx,  (42)

where, as before, Φ2(·, ·; ρ) denotes the cdf of N2(0, 0, 1, 1, ρ). But, from Lemma 2, we have

∂Φ2(λ1 x, λ2 x; ρ)/∂x = λ1 φ(λ1 x) Φ((λ2 − ρλ1)x/√(1−ρ²)) + λ2 φ(λ2 x) Φ((λ1 − ρλ2)x/√(1−ρ²)).  (43)

Upon using the expression in (43) inside the integrand in (42) and integrating by parts, we obtain, for w ∈ R,

g_GSt(w; 1, λ1, λ2, ρ) = [c(λ1, λ2, ρ)/(π(1 + w²))] [Φ2(0, 0; ρ)
+ (λ1 w/√(1 + (1+λ1²)w²)) ∫_0^∞ φ(u) Φ((λ2 − ρλ1)wu/√((1−ρ²){1 + (1+λ1²)w²})) du
+ (λ2 w/√(1 + (1+λ2²)w²)) ∫_0^∞ φ(u) Φ((λ1 − ρλ2)wu/√((1−ρ²){1 + (1+λ2²)w²})) du].  (44)

Now, using the fact that Φ2(0, 0; ρ) = (1/2π) cos^{−1}(−ρ) and the formula (see, for example, Chiogna, 1998)

∫_0^∞ φ(u) Φ(δu) du = (1/2π) cos^{−1}(−δ/√(1 + δ²)),  δ ∈ R,

we readily obtain from Eq. (44) that, for w ∈ R,

g_GSt(w; 1, λ1, λ2, ρ) = [c(λ1, λ2, ρ)/(2π²(1 + w²))] [cos^{−1}(−ρ)
+ (λ1 w/√(1 + (1+λ1²)w²)) cos^{−1}(−(λ2 − ρλ1)w/√(1 − ρ² + {1 − ρ² + λ1² + λ2² − 2ρλ1λ2}w²))
+ (λ2 w/√(1 + (1+λ2²)w²)) cos^{−1}(−(λ1 − ρλ2)w/√(1 − ρ² + {1 − ρ² + λ1² + λ2² − 2ρλ1λ2}w²))].  (45)

It should be noted that the particular case

g_St(w; 1, λ) = g_GSt(w; 1, λ, 0, 0) = g_GSt(w; 1, 0, λ, 0) = [1/(π(1 + w²))] [1 + λw/√(1 + (1+λ²)w²)],  w ∈ R,

is the skew-Cauchy distribution discussed by Behboodian et al. (2006).

Special Case 2. In the special case when ν = 2, we obtain a skew-t2 distribution with three parameters. The Student t-distribution with 2 degrees of freedom is the simplest Student's t-distribution (see Jones, 2002) as it possesses an explicit cdf and some interesting


properties. Here, we discuss a skew-t2 distribution with three parameters and show that it too possesses a simple explicit cdf. In the case when ν = 2, we have from (39) that

G_GSt(w; 2, λ1, λ2, ρ) = ∫_0^∞ 2x e^{−x²} Φ_GSN(wx; λ1, λ2, ρ) dx,  w ∈ R.

Integrating now by parts, we obtain

G_GSt(w; 2, λ1, λ2, ρ) = Φ_GSN(0; λ1, λ2, ρ) + w ∫_0^∞ e^{−x²} φ_GSN(wx; λ1, λ2, ρ) dx
= Φ_GSN(0; λ1, λ2, ρ) + c(λ1, λ2, ρ) w ∫_0^∞ e^{−x²} φ(wx) Φ2(λ1 wx, λ2 wx; ρ) dx
= Φ_GSN(0; λ1, λ2, ρ) + [c(λ1, λ2, ρ) w / (c(λ1 w/√(2+w²), λ2 w/√(2+w²), ρ) √(2+w²))] {1 − Φ_GSN(0; λ1 w/√(2+w²), λ2 w/√(2+w²), ρ)}  (46)

for w ∈ R, where Φ_GSN(0; λ1, λ2, ρ) is as given in the Appendix. An explicit expression for the corresponding pdf can also be derived readily by differentiating the expression of the cdf in (46).

Remark 9. If we set λ1 = λ, λ2 = 0, ρ = 0 (or λ1 = 0, λ2 = λ, ρ = 0) in (46), we obtain a skew-t2 distribution with one parameter (which is the univariate case of Azzalini and Capitanio, 2003) with cdf

G_St(w; 2, λ) = G_GSt(w; 2, λ, 0, 0) = G_GSt(w; 2, 0, λ, 0)
= 1/2 − (1/π) tan^{−1}(λ) + w/(2√(2+w²)) + (1/π)(w/√(2+w²)) tan^{−1}(λw/√(2+w²)),  w ∈ R.

Note that if λ = 0 in the above expression, we obtain the cdf of the t2 distribution (Jones, 2002) given by

G_St(w; 2, 0) = G_GSt(w; 2, 0, 0, 0) = 1/2 + w/(2√(2+w²)),  w ∈ R.
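The explicit skew-t2 cdf of Remark 9 can be verified against a direct numerical integration of the univariate skew-t density 2 g(x; ν) G(λx√((ν+1)/(ν+x²)); ν+1). The short Python sketch below is our own check (function names are assumptions, not the paper's notation).

```python
import numpy as np
from scipy import integrate, stats

def skew_t_pdf(x, nu, lam):
    # Azzalini-Capitanio univariate skew-t density St(nu, lam)
    return 2.0 * stats.t.pdf(x, nu) * stats.t.cdf(
        lam * x * np.sqrt((nu + 1.0) / (nu + x * x)), nu + 1.0)

def skew_t2_cdf(w, lam):
    # explicit cdf of Remark 9 (nu = 2)
    r = w / np.sqrt(2.0 + w * w)
    return 0.5 - np.arctan(lam) / np.pi + 0.5 * r + r * np.arctan(lam * r) / np.pi
```

For λ = 0 this reduces to Jones's (2002) explicit t2 cdf, which agrees with SciPy's `t.cdf(w, 2)`.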

10. Moments of generalized skew-tν distribution

We can derive the moments of T_{ν,λ1,λ2,ρ} ∼ GSt(ν, λ1, λ2, ρ) from Eq. (37) through the moments of Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ). In general, if m is an integer such that m < ν, we have

E[T^m_{ν,λ1,λ2,ρ}] = E[V^{−m/2}] E[Z^m_{λ1,λ2,ρ}],  (47)

where V ∼ χ²_ν/ν and Z_{λ1,λ2,ρ} ∼ GSN(λ1, λ2, ρ). Since

E[V^{−m/2}] = (ν/2)^{m/2} Γ((ν − m)/2)/Γ(ν/2),  m < ν,

by using the expressions of moments of Z_{λ1,λ2,ρ} derived in Section 3, we obtain, for example, with A = 1 − ρ² + λ1² + λ2² − 2ρλ1λ2,

E[T_{ν,λ1,λ2,ρ}] = [c(λ1, λ2, ρ)/4] √(ν/π) [Γ((ν − 1)/2)/Γ(ν/2)] {λ1/√(1+λ1²) + λ2/√(1+λ2²)},  ν > 1,  (48)

E[T²_{ν,λ1,λ2,ρ}] = [ν/(ν − 2)] [1 + (c(λ1, λ2, ρ)/(2π√A)) {λ1(λ2 − ρλ1)/(1+λ1²) + λ2(λ1 − ρλ2)/(1+λ2²)}],  ν > 2,  (49)

Var(T_{ν,λ1,λ2,ρ}) = E[T²_{ν,λ1,λ2,ρ}] − {E[T_{ν,λ1,λ2,ρ}]}²,  ν > 2.  (50)
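The factor E[V^{−m/2}] used throughout can be checked directly: for V ∼ χ²_ν/ν, the closed form (ν/2)^{m/2} Γ((ν−m)/2)/Γ(ν/2) should match numerical integration against the χ²_ν density. A minimal Python sketch (our own illustration):

```python
import numpy as np
from scipy import integrate, stats
from scipy.special import gammaln

def inv_sqrt_moment(nu, m):
    """E[V^(-m/2)] for V ~ chi2_nu / nu, by numerical integration."""
    integrand = lambda q: (q / nu) ** (-m / 2.0) * stats.chi2.pdf(q, nu)
    return integrate.quad(integrand, 0.0, np.inf)[0]

def inv_sqrt_moment_closed(nu, m):
    """Closed form (nu/2)^(m/2) * Gamma((nu-m)/2) / Gamma(nu/2), for m < nu."""
    return (nu / 2.0) ** (m / 2.0) * np.exp(
        gammaln((nu - m) / 2.0) - gammaln(nu / 2.0))
```

For m = 2 this gives ν/(ν − 2), the familiar second inverse moment that appears in (49) and (50).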


11. Properties of generalized skew-tν distribution

In this section, we present a representation theorem and the unimodality property of the GSt(ν, λ1, λ2, ρ) distribution.

Theorem 6 (Representation theorem). If (X1, X2, X3)^T ∼ t3(0, R, ν), where R is a 3 × 3 positive definite dispersion matrix of the form considered earlier in Theorem 2, then

X1 | min(X2, X3) > 0 ∼ GSt(ν, λ1, λ2, ρ23.1),  (51)

where

λ1 = ρ12/√(1 − ρ12²),  λ2 = ρ13/√(1 − ρ13²)  and  ρ23.1 = (ρ23 − ρ12ρ13)/(√(1 − ρ12²)√(1 − ρ13²))  (52)

as defined earlier.

Proof. Since

(X1, X2, X3)^T =d V^{−1/2}(W1, W2, W3)^T, where (W1, W2, W3)^T ∼ N3(0, R),

we have

X1 | min(X2, X3) > 0 =d V^{−1/2} W1 | min(V^{−1/2} W2, V^{−1/2} W3) > 0 =d V^{−1/2} W1 | min(W2, W3) > 0,

where V ∼ χ²_ν/ν. The result in (51) now follows immediately from the fact that W1 | min(W2, W3) > 0 ∼ GSN(λ1, λ2, ρ23.1), where λ1, λ2 and ρ23.1 are as given in Eq. (52). □


Theorem 7. The generalized skew-t density function g_GSt(w; ν, λ1, λ2, ρ) in Eq. (40) is unimodal.

12. Order statistics from bivariate tν distribution

Let (X1, X2)^T ∼ t2(0, R, ν), where

R = ((σ1², ρσ1σ2), (ρσ1σ2, σ2²)),

and let (X1:2, X2:2)^T denote the order statistics obtained from the random vector (X1, X2)^T. Then, because of Lemma 4, we have

X1:2 =d V^{−1/2} W1:2  and  X2:2 =d V^{−1/2} W2:2,  (53)

where V ∼ χ²_ν/ν, (W1, W2)^T ∼ N2(0, R), and (W1:2, W2:2)^T are the order statistics corresponding to the random vector (W1, W2)^T.

Theorem 8. The cdf of X2:2 is the mixture

H_(2)(t; R, ν) = (1/2) {G_St(t/σ1; ν, δ1) + G_St(t/σ2; ν, δ2)},  t ∈ R,  (54)

where G_St(·; ν, δ) denotes the cdf of T_{ν,δ} ∼ St(ν, δ),

δ1 = ((σ1/σ2) − ρ)/√(1 − ρ²)  and  δ2 = ((σ2/σ1) − ρ)/√(1 − ρ²).  (55)

Proof. Consider

H_(2)(t; R, ν) = Pr{X2:2 ≤ t} = Pr{V^{−1/2} W2:2 ≤ t} = Pr{W2:2 ≤ t V^{1/2}} = E[F_(2)(t V^{1/2}; R)],  t ∈ R,

where F_(2)(·; R) denotes the cdf of W2:2. But, it is known that

F_(2)(s; R) = (1/2) {Φ_SN(s/σ1; δ1) + Φ_SN(s/σ2; δ2)},  s ∈ R;  (56)


see Behboodian et al. (2006). Upon using this expression in Eq. (56), we obtain

H_(2)(t; R, ν) = E[F_(2)(t V^{1/2}; R)]
= (1/2) {E[Φ_SN(t V^{1/2}/σ1; δ1)] + E[Φ_SN(t V^{1/2}/σ2; δ2)]}
= (1/2) {G_St(t/σ1; ν, δ1) + G_St(t/σ2; ν, δ2)},  t ∈ R,

where the last equality follows from the result in Eq. (41). Hence, the theorem. □

Remark 10. The cdf of X1:2 can be readily obtained from Theorem 8 by using the fact that X1:2 =d −X2:2.

Remark 11. Upon differentiating the expression of H_(2)(t; R, ν) in Eq. (54) with respect to t, we obtain the pdf of X2:2 as

h_(2)(t; R, ν) = (1/(2σ1)) g_St(t/σ1; ν, δ1) + (1/(2σ2)) g_St(t/σ2; ν, δ2),  t ∈ R,  (57)

where g_St(·; ν, δ) denotes the pdf of T_{ν,δ} ∼ St(ν, δ).

Remark 12. When σ1 = σ2 = 1, we simply have

X1:2 ∼ St(ν, −√((1−ρ)/(1+ρ)))  and  X2:2 ∼ St(ν, √((1−ρ)/(1+ρ))).  (58)
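Theorem 8 can be checked numerically: evaluating the half–half mixture of skew-t cdfs and comparing it with a Monte Carlo estimate of Pr{max(X1, X2) ≤ t} for the bivariate t. The Python sketch below is our own illustration; the parameter values (ν = 5, σ1 = 1, σ2 = 2, ρ = 0.3, t = 1) are arbitrary, and the skew-t cdf is computed by integrating its density rather than from a library routine.

```python
import numpy as np
from scipy import integrate, stats

def skew_t_cdf(w, nu, lam):
    # cdf of St(nu, lam) by integrating the Azzalini-Capitanio density
    pdf = lambda x: 2.0 * stats.t.pdf(x, nu) * stats.t.cdf(
        lam * x * np.sqrt((nu + 1.0) / (nu + x * x)), nu + 1.0)
    return integrate.quad(pdf, -np.inf, w)[0]

# Theorem 8: P(max(X1, X2) <= t) as a half-half mixture of skew-t cdfs
nu, s1, s2, rho, t0 = 5.0, 1.0, 2.0, 0.3, 1.0
d1 = (s1 / s2 - rho) / np.sqrt(1.0 - rho ** 2)   # delta_1 of Eq. (55)
d2 = (s2 / s1 - rho) / np.sqrt(1.0 - rho ** 2)   # delta_2 of Eq. (55)
mix = 0.5 * (skew_t_cdf(t0 / s1, nu, d1) + skew_t_cdf(t0 / s2, nu, d2))

# Monte Carlo check via the scale-mixture representation (53)
rng = np.random.default_rng(1)
n = 400_000
cov = np.array([[s1 ** 2, rho * s1 * s2], [rho * s1 * s2, s2 ** 2]])
w = rng.multivariate_normal([0.0, 0.0], cov, size=n)
v = rng.chisquare(nu, size=n) / nu
x = w / np.sqrt(v)[:, None]
mc = np.mean(x.max(axis=1) <= t0)
```

The two estimates agree to Monte Carlo accuracy.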

We can also easily obtain the moments of X1:2 and X2:2 as

E[X^m_{i:2}] = E[V^{−m/2}] E[W^m_{i:2}],  i = 1, 2,

where E[V^{−m/2}] is as given earlier. In the special case when σ1 = σ2 = 1, we get from Eq. (58) that

E[X2:2] = −E[X1:2] = √(ν(1−ρ)/(2π)) Γ((ν − 1)/2)/Γ(ν/2),  ν > 1,  (59)

E[X²2:2] = E[X²1:2] = ν/(ν − 2),  ν > 2.  (60)
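Eqs. (59) and (60) can be checked by simulating the bivariate t through its scale-mixture representation and comparing the sample moments of the maximum with the closed forms. A minimal Python sketch (parameter values ν = 5, ρ = 0.4 are illustrative):

```python
import numpy as np
from scipy.special import gammaln

nu, rho = 5.0, 0.4
rng = np.random.default_rng(7)
n = 400_000
w = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
v = rng.chisquare(nu, size=n) / nu
x22 = (w / np.sqrt(v)[:, None]).max(axis=1)   # X_{2:2} from the bivariate t

# closed forms of Eqs. (59) and (60)
mean_formula = np.sqrt(nu * (1.0 - rho) / (2.0 * np.pi)) * np.exp(
    gammaln((nu - 1.0) / 2.0) - gammaln(nu / 2.0))
second_formula = nu / (nu - 2.0)
```

With samples this large, the Monte Carlo error is well below the tolerances used in the assertions.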

13. Order statistics from trivariate tν-distribution

In this section, we discuss the distributions of order statistics from the trivariate t-distribution in terms of generalized skew-t distributions. For this purpose, let us assume that (X1, X2, X3)^T ∼ t3(0, R, ν), where R is a 3 × 3 dispersion matrix that is positive definite and of the general form considered earlier in Section 6. Let X1:3 = min(X1, X2, X3) < X2:3 < X3:3 = max(X1, X2, X3) denote the order statistics from (X1, X2, X3)^T, and let H_(i)(t; R, ν) denote the cdf of X_{i:3} for i = 1, 2, 3.

From Lemma 4, we have (X1, X2, X3)^T =d V^{−1/2}(W1, W2, W3)^T, where V ∼ χ²_ν/ν independently of (W1, W2, W3)^T ∼ N3(0, R), and so

X_{i:3} =d V^{−1/2} W_{i:3},  i = 1, 2, 3,  (61)

where W_{i:3} (i = 1, 2, 3) are the order statistics from (W1, W2, W3)^T. In the following theorems, we derive explicit expressions for H_(i)(t; R, ν), i = 1, 2, 3, using the cdf of W_{i:3}, denoted by F_(i)(t; R) for i = 1, 2, 3.

Theorem 9. The cdf of X3:3 is the mixture

H_(3)(t; R, ν) = a(h1) G_GSt(t/σ1; ν, h1) + a(h2) G_GSt(t/σ2; ν, h2) + a(h3) G_GSt(t/σ3; ν, h3),  t ∈ R,  (62)

where G_GSt(·; ν, h) denotes the cdf of GSt(ν, h), a(h) is as given in Eq. (5), and h1, h2 and h3 are the same as defined earlier in Eq. (25).


Proof. From Eq. (61), we have

H_(3)(t; R, ν) = Pr{X3:3 ≤ t} = Pr{V^{−1/2} W3:3 ≤ t} = Pr{W3:3 ≤ t V^{1/2}} = E[F_(3)(t V^{1/2}; R)],  t ∈ R.  (63)

But, from Theorem 3, it is known that

F_(3)(s; R) = a(h1) Φ_GSN(s/σ1; h1) + a(h2) Φ_GSN(s/σ2; h2) + a(h3) Φ_GSN(s/σ3; h3),  s ∈ R.  (64)

Upon using this expression in Eq. (63), we obtain

H_(3)(t; R, ν) = a(h1) E[Φ_GSN(t V^{1/2}/σ1; h1)] + a(h2) E[Φ_GSN(t V^{1/2}/σ2; h2)] + a(h3) E[Φ_GSN(t V^{1/2}/σ3; h3)],  t ∈ R.  (65)

The expression in Eq. (62) follows readily from (65) upon using the relation in (38), which completes the proof of the theorem. □

Theorem 10. The cdf of X2:3 is the mixture

H_(2)(t; R, ν) = a(d1) G_GSt(t/σ1; ν, d1) + a(d2) G_GSt(t/σ1; ν, d2) + a(d3) G_GSt(t/σ2; ν, d3)
+ a(d4) G_GSt(t/σ2; ν, d4) + a(d5) G_GSt(t/σ3; ν, d5) + a(d6) G_GSt(t/σ3; ν, d6),  t ∈ R,  (66)

where G_GSt(·; ν, d) denotes the cdf of GSt(ν, d), a(d) is as given in (7), and d_i (i = 1, 2, ..., 6) are the same as defined earlier in Eq. (31).

Proof. The proof is along the same lines as that of Theorem 9, but based on the expression

F_(2)(s; R) = a(d1) Φ_GSN(s/σ1; d1) + a(d2) Φ_GSN(s/σ1; d2) + a(d3) Φ_GSN(s/σ2; d3)
+ a(d4) Φ_GSN(s/σ2; d4) + a(d5) Φ_GSN(s/σ3; d5) + a(d6) Φ_GSN(s/σ3; d6),  s ∈ R,  (67)

derived earlier in Theorem 4. □



Remark 13. The distribution function of X1:3 , viz., H(1) (t; R, ), can be easily obtained from Theorem 9 because of the relation d

X1:3 = −X3:3 . Remark 14. The pdf of Xi:3 can be obtained readily by differentiating the expression of the cdf of Xi:3 with respect to t. For example, from the cdf of X3:3 presented in Theorem 9, we immediately obtain the pdf of X3:3 as       a(h1 ) t a(h2 ) t a(h3 ) t h(3) (t; R, ) = gGSt ; , h 1 + gGSt ; , h2 + gGSt ; , h3 , t ∈ R. (68)

1

1

2

2

3

3

Remark 15. In the special case when 1 = 2 = 3 = 1 and 12 = 13 = 23 = ∗ (− 12 < ∗ < 1), we have # # 1 − ∗ 1 − ∗ ∗ X1:3 ∼ GSt , − , − , , 1 + ∗ 1 + ∗ 1 + ∗ # # 1 − ∗ 1 − ∗ −∗ X2:3 ∼ GSt , , − , , 1 + ∗ 1 + ∗ 1 + ∗ # # 1 − ∗ 1 − ∗ ∗ , , X3:3 ∼ GSt , . 1 + ∗ 1 + ∗ 1 + ∗ The above distributional results, when combined with Theorem 7, readily imply that the order statistics from a trivariate exchangeable t-distribution are unimodal. We can also easily derive the moments of order statistics from the trivariate t -distribution from Eq. (61) and the expressions m ] derived earlier in Section 7. In general, for any integer m < , we have of E[Wi:3 m m ] = E[V −m/2 ]E[Wi:3 ], E[Xi:3

i = 1, 2, 3, m < .


In the special case when σ1 = σ2 = σ3 = 1, for example, we obtain the following expressions:

E[X3:3] = −E[X1:3] = [√ν Γ((ν − 1)/2)/(2√(2π) Γ(ν/2))] {√(1−ρ12) + √(1−ρ13) + √(1−ρ23)},  ν > 1,
E[X2:3] = 0,  ν > 1;  (69)

E[X²3:3] = E[X²1:3] = [ν/(ν − 2)] (1 + √A/(2π)),  ν > 2,  (70)

E[X²2:3] = [ν/(ν − 2)] (1 − √A/π),  ν > 2,  (71)

Var(X3:3) = Var(X1:3) = [ν/(ν − 2)] (1 + √A/(2π)) − E²[X3:3],  ν > 2,
Var(X2:3) = [ν/(ν − 2)] (1 − √A/π),  ν > 2,

where

A = 6 − {(1+ρ12)² + (1+ρ13)² + (1+ρ23)²} + 2(ρ12ρ13 + ρ12ρ23 + ρ13ρ23).  (72)

In particular, when ρ12 = ρ13 = ρ23 = ρ* (−1/2 < ρ* < 1), so that A = 3(1−ρ*)², we deduce from Eqs. (69)–(71) the following expressions:

E[X3:3] = −E[X1:3] = [3Γ((ν − 1)/2)/(2√(2π) Γ(ν/2))] √(ν(1−ρ*)),  ν > 1,
E[X2:3] = 0,  ν > 1;  (73)

Var(X3:3) = Var(X1:3) = ν/(ν − 2) + [ν(1−ρ*)/(2π)] {√3/(ν − 2) − 9Γ²((ν − 1)/2)/(4Γ²(ν/2))},  ν > 2,
Var(X2:3) = [ν/(ν − 2)] {1 − √3(1−ρ*)/π},  ν > 2.  (74)
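The exchangeable-case moments in Eqs. (73) and (70) can be checked by simulating the trivariate t through its normal scale mixture. A short Python sketch (our own illustration; ν = 6 and ρ* = 0.5 are arbitrary choices):

```python
import numpy as np
from scipy.special import gammaln

nu, rho = 6.0, 0.5
rng = np.random.default_rng(3)
n = 400_000
R = np.full((3, 3), rho)
np.fill_diagonal(R, 1.0)
w = rng.multivariate_normal(np.zeros(3), R, size=n)
v = rng.chisquare(nu, size=n) / nu
x33 = (w / np.sqrt(v)[:, None]).max(axis=1)   # X_{3:3} from the trivariate t

# Eq. (73): mean of the maximum in the exchangeable case
mean_formula = 3.0 * np.exp(gammaln((nu - 1.0) / 2.0) - gammaln(nu / 2.0)) \
    * np.sqrt(nu * (1.0 - rho)) / (2.0 * np.sqrt(2.0 * np.pi))
# Eq. (70) with A = 3(1 - rho)^2: second moment of the maximum
second_formula = nu / (nu - 2.0) * (1.0 + np.sqrt(3.0) * (1.0 - rho) / (2.0 * np.pi))
```

The sample mean and second moment of X3:3 agree with the closed forms to Monte Carlo accuracy.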

Acknowledgments

The authors thank the Natural Sciences and Engineering Research Council of Canada for supporting this research. The authors also express their sincere thanks to Professors M. Chris Jones (The Open University, Milton Keynes, UK), Herbert A. David (Iowa State University, Ames, Iowa), and two anonymous referees for their suggestions and constructive comments which led to a considerable improvement in the content as well as presentation of this manuscript.

Appendix

Result. We have

Φ_GSN(0; λ1, λ2, ρ) = 1/2 − [c(λ1, λ2, ρ)/(4π)] {tan^{−1}(λ1) + tan^{−1}(λ2)},

where Φ_GSN(·; λ1, λ2, ρ) denotes the cdf of GSN(λ1, λ2, ρ) and c(·, ·, ρ) is as given in (8).

Proof. Consider

Φ_GSN(0; λ1, λ2, ρ) = ∫_{−∞}^0 φ_GSN(z; λ1, λ2, ρ) dz = c(λ1, λ2, ρ) ∫_{−∞}^0 φ(z) Φ2(λ1 z, λ2 z; ρ) dz = c(λ1, λ2, ρ) Pr{Y1 < 0, Y2 < λ1 Y1, Y3 < λ2 Y1},

where Y1 ∼ N(0, 1) independently of (Y2, Y3)^T ∼ N2(0, 0, 1, 1, ρ). Thus,

Φ_GSN(0; λ1, λ2, ρ) = c(λ1, λ2, ρ) Pr{Y1 < 0, Y2 − λ1 Y1 < 0, Y3 − λ2 Y1 < 0}.


Now, upon using the orthant probability for (Y1, (Y2 − λ1 Y1)/√(1+λ1²), (Y3 − λ2 Y1)/√(1+λ2²))^T (see, for example, Kotz et al., 2000) and some trigonometric relations, the required result is obtained. □

References

Adcock, C.J., 2007. Extensions of Stein's lemma for the skew-normal distribution. Comm. Statist. Theory Methods 36, 1661–1671.
Arellano-Valle, R.B., Azzalini, A., 2006. On the unification of families of skew-normal distributions. Scand. J. Statist. 33, 561–574.
Arellano-Valle, R.B., Genton, M.G., 2005. On fundamental skew distributions. J. Multivariate Anal. 96, 93–116.
Arellano-Valle, R.B., Genton, M.G., 2007. On the exact distribution of linear combinations of order statistics from dependent random variables. J. Multivariate Anal. 98, 1876–1894.
Arellano-Valle, R.B., Genton, M.G., 2008. On the exact distribution of the maximum of absolutely continuous dependent random variables. Statist. Probab. Lett. 78, 27–35.
Arellano-Valle, R.B., Gomez, H.W., Quintana, F.A., 2004. A new class of skew-normal distributions. Comm. Statist. Theory Methods 33, 1465–1480.
Arnold, B.C., Beaver, R.J., 2002. Skewed multivariate models related to hidden truncation and/or selective reporting. Test 11, 7–54.
Azzalini, A., 1985. A class of distributions which includes the normal ones. Scand. J. Statist. 12, 171–178.
Azzalini, A., 1986. Further results on a class of distributions which includes the normal ones. Statistica 46, 199–208.
Azzalini, A., 2005. The skew-normal distribution and related multivariate families. Scand. J. Statist. 32, 159–188.
Azzalini, A., Capitanio, A., 2003. Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t distribution. J. Roy. Statist. Soc. Ser. B 65, 367–389.
Azzalini, A., Chiogna, M., 2004. Some results on the stress–strength model for skew-normal variates. Metron LXII, 315–326.
Azzalini, A., Dalla Valle, A., 1996. The multivariate skew-normal distribution. Biometrika 83, 715–726.
Balakrishnan, N., 1993. Multivariate normal distribution and multivariate order statistics induced by ordering linear combinations. Statist. Probab. Lett. 17, 343–350.
Balakrishnan, N., 2002. Discussion on "Skewed multivariate models related to hidden truncation and/or selective reporting" by B.C. Arnold and R.J. Beaver. Test 11, 37–39.
Basu, A.P., Ghosh, J.K., 1978. Identifiability of the multinormal and other distributions under competing risks model. J. Multivariate Anal. 8, 413–429.
Behboodian, J., Jamalizadeh, A., Balakrishnan, N., 2006. A new class of skew-Cauchy distributions. Statist. Probab. Lett. 76, 1488–1493.
Branco, M., Dey, D.K., 2001. A general class of multivariate skew-elliptical distributions. J. Multivariate Anal. 79, 99–113.
Cain, M., 1994. The moment-generating function of the minimum of bivariate normal random variables. Amer. Statist. 48, 124–125.
Cain, M., Pan, E., 1995. Moments of the minimum of bivariate normal random variables. Math. Sci. 20, 119–122.
Cambanis, S., Huang, S., Simons, G., 1981. On the theory of elliptically contoured distributions. J. Multivariate Anal. 11, 368–385.
Chiogna, M., 1998. Some results on the scalar skew-normal distribution. J. Italian Statist. Soc. 7, 1–13.
David, H.A., Joshi, P.C., 1968. Recurrence relations between moments of order statistics for exchangeable variates. Ann. Math. Statist. 39, 272–274.
David, H.A., Nagaraja, H.N., 2003. Order Statistics, third ed. Wiley, Hoboken, NJ.
Fang, K.T., Kotz, S., Ng, K.W., 1990. Symmetric Multivariate and Related Distributions. Chapman & Hall, London.
Gonzalez-Farias, G., Dominguez-Molina, A., Gupta, A.K., 2004. Additive properties of skew normal random vectors. J. Statist. Plann. Inference 126, 521–534.
Gupta, S.S., 1963. Probability integrals of multivariate normal and multivariate t. Ann. Math. Statist. 34, 792–828.
Gupta, S.S., Nagel, K., Panchapakesan, S., 1973. On the order statistics from equally correlated normal random variables. Biometrika 60, 403–413.
Gupta, S.S., Pillai, K.C.S., 1965. On linear functions of ordered correlated normal random variables. Biometrika 52, 367–379.
Henze, N., 1986. A probabilistic representation of the skew-normal distribution. Scand. J. Statist. 13, 271–275.
Huang, J.S., Ghosh, M., 1982. A note on strong unimodality of order statistics. J. Amer. Statist. Assoc. 77, 929–930.
Jamalizadeh, A., Balakrishnan, N., 2008. On order statistics from bivariate skew-normal and skew-t distributions. J. Statist. Plann. Inference 138, 4187–4197.
Jamalizadeh, A., Khosravi, M., Balakrishnan, N., 2009. Recurrence relations for distributions of a skew-t and a linear combination of order statistics from a bivariate-t. Comput. Statist. Data Anal. 53, 847–852.
Jones, M.C., 2002. Student's simplest distribution. J. Roy. Statist. Soc. Ser. D 51, 41–49.
Karlin, S., 1968. Total Positivity, vol. 1. Stanford University Press, Stanford.
Kotz, S., Balakrishnan, N., Johnson, N.L., 2000. Continuous Multivariate Distributions, vol. 1, second ed. Wiley, New York.
Lien, D.-H.D., 1986. Moments of ordered bivariate log-normal distributions. Econom. Lett. 20, 45–47.
Liseo, B., Loperfido, N., 2003. A Bayesian interpretation of the multivariate skew-normal distribution. Statist. Probab. Lett. 61, 396–401.
Loperfido, N., 2001. Quadratic forms of skew-normal random vectors. Statist. Probab. Lett. 54, 381–387.
Nagaraja, H.N., 1982. A note on linear functions of ordered correlated normal random variables. Biometrika 69, 284–285.
Young, D.H., 1967. Recurrence relations between the P.D.F.'s of order statistics of dependent variables, and some applications. Biometrika 54, 283–292.