Some positive dependence stochastic orders

Some positive dependence stochastic orders

Journal of Multivariate Analysis 97 (2006) 46 – 78 www.elsevier.com/locate/jmva Regular article Some positive dependence stochastic orders Antonio C...

415KB Sizes 0 Downloads 62 Views

Journal of Multivariate Analysis 97 (2006) 46 – 78 www.elsevier.com/locate/jmva

Regular article

Some positive dependence stochastic orders Antonio Colangeloa,1 , Marco Scarsinib , Moshe Shakedc,∗ a Istituto di Metodi Quantitativi, Università Commerciale “L. Bocconi”, Viale Isonzo 25, 20136 Milano, Italy b Dipartimento di Statistica e Matematica Applicata, Università di Torino, Piazza Arbarello 8,

10122 Torino, Italy c Department of Mathematics, University of Arizona, Tucson, Arizona 85721, USA

Received 1 March 2004 Available online 19 January 2005

Abstract In this paper we study some stochastic orders of positive dependence that arise when the underlying random vectors are ordered with respect to some multivariate hazard rate stochastic orders, and have the same univariate marginal distributions. We show how the orders can be studied by restricting them to copulae, we give a number of examples, and we study some positive dependence concepts that arise from the new positive dependence orders. We also discuss the relationship of the new orders to other positive dependence orders that have appeared in the literature. © 2004 Elsevier Inc. All rights reserved. AMS 1991 subject classification: 60E15 Keywords: Copula; Fréchet classes and bounds; Marshall–Olkin distributions; Farlie–Gumbel–Morgenstern distributions; Archimedean copula; Left tail decreasing (LTD); Right tail increasing (RTI); Multivariate total positivity; Likelihood ratio order; Lower orthant decreasing ratio (lodr) order; Upper orthant increasing ratio (uoir) order

∗ Corresponding author. Fax: +1 520 621 8322.

E-mail address: [email protected] (M. Shaked). 1 This work was done while this author was visiting the Department of Mathematics at the University of Arizona

during the academic year 2003–2004. The kind hospitality received is gratefully acknowledged. 0047-259X/$ - see front matter © 2004 Elsevier Inc. All rights reserved. doi:10.1016/j.jmva.2004.11.006

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

47

1. Introduction Some stochastic orders, that compare the “size” or “location” or “magnitude” of two random vectors, may be thought of as stochastic orders of positive dependence if the compared random vectors have the same univariate marginal distributions. For example, when this is the situation, in the bivariate case, the lower and the upper orthant orders  lo and  uo (see, for example, [26, Section 4.G.1] or [22, p. 90] become the positive quadrant dependence (PQD) order  PQD (see, for example, [16]); in the bivariate case the order  PQD is equivalent to the supermodular order (see, for example, [29] or [27]). On the other hand, some multivariate location orders do not give anything meaningful once the marginals are held fixed. For instance, the usual multivariate stochastic order  st (see, for example, [26, Section 4.B]) can order two distributions with the same marginals only if these distributions are equal (see Proposition 6 in [24] or Theorem 4 in [2]). A similar problem was faced by Dall’Aglio and Scarsini [9], who saw that the lift-zonoid order can order distributions with the same marginals only if they are equal, and therefore they used the weaker zonoid order as an order of linear dependence. In this paper we study some stochastic orders of positive dependence that arise when the underlying random vectors are ordered with respect to some multivariate hazard rate stochastic orders, and have the same univariate marginal distributions. A pair of such orders is studied in Section 2. After giving the definitions and some basic properties, we show how the orders can be studied by restricting them to copulae. We then give a number of examples, and study some positive dependence concepts that arise from the new positive dependence orders. We close Section 2 with a discussion on the relationship of the new orders to other positive dependence orders that have appeared in the literature. A pair of stronger positive dependence orders is introduced and studied in Section 3. Finally, in Section 4, we show that another related idea, that could have perhaps led to even stronger orders, is actually too strong, since, at least in the bivariate case, it can “compare” only identically distributed random vectors. Some conventions that are used in this paper are the following. By ‘increasing’ and ‘decreasing’ we mean ‘nondecreasing’ and ‘nonincreasing,’ respectively. For any two ndimensional vectors x = (x1 , x2 , . . . , xn ) and y = (y1 , y2 , . . . , yn ), the notation x y means xi yi , i = 1, 2, . . . , n. The minimum and the maximum operators are denoted by ∧ and ∨; furthermore, we use the notation (x1 , x2 , . . . , xn ) ∧ (y1 , y2 , . . . , yn ) = (x1 ∧ y1 , x2 ∧ y2 , . . . , xn ∧ yn ), and (x1 , x2 , . . . , xn ) ∨ (y1 , y2 , . . . , yn ) = (x1 ∨ y1 , x2 ∨ y2 , . . . , xn ∨ yn ). The notation =st stands for equality in law. For any random vector (or variable) X, and  an event A, we denote by [XA] a random vector (or variable) whose distribution is the conditional distribution of X given A. We recall (see, for example, [26] or [22]) that a random vector X is said to be smaller than a random vector Y, in the upper orthant order (denoted as X uo Y), if P {X > x}P {Y > x} for all x. A random vector X is said to be smaller than a random vector Y, in the lower orthant order (denoted as X lo Y), if P {X x}P {Y x} for all x. We denote by n (F1 , F2 , . . . , Fn ) the class of all the n-dimensional distributions with the univariate marginals F1 , F2 , . . . , Fn , n 2. 
In the literature the class n (F1 , F2 , . . . , Fn ) is called a Fréchet class. Define F + (x) = min{F1 (x1 ), F2 (x2 ), . . . , Fn (xn )} and

48

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

n

F − (x) = max{

i=1 Fi (xi ) − n + 1, 0}.

Then for all F ∈ n (F1 , F2 , . . . , Fn ) we have

F − (x)F (x) F + (x) for all x. The bounds F − and F + are pointwise sharp. They are called, respectively, the Fréchet lower and upper bounds. Furthermore, F + ∈ n (F1 , F2 , . . . , Fn ) for all n 2, and F − ∈ 2 (F1 , F2 ); that is, when n = 2. 2. The orthant ratio orders 2.1. Definitions and basic properties Let X = (X1 , X2 , . . . , Xn ) and Y = (Y1 , Y2 , . . . , Yn ) be two random vectors with respective distribution functions F and G, and with survival functions F and G defined by F (x) = P {X > x} and G(x) = P {Y > x}, x ∈ Rn . We suppose, unless stated otherwise, that F and G belong to the same Fréchet class. We say that X is smaller than Y in the lower orthant decreasing ratio order (denoted by X lodr Y or F  lodr G) if F (y)G(x) F (x)G(y)

whenever xy.

(2.1)

This is equivalent to G(x) F (x)

is decreasing in x ∈ {x : G(x) > 0},

(2.2)

where in (2.2) we use the convention a/0 ≡ ∞ whenever a > 0. Note that (2.2) can be written equivalently as F (x − u) G(x − u)  , F (x) G(x)

u0, x ∈ {x : F (x) > 0} ∩ {x : G(x) > 0}

and it is also equivalent to   [X − xXx]  lo [Y − xY x],

x ∈ {x : F (x) > 0} ∩ {x : G(x) > 0}.

(2.3)

(2.4)

Note that from (2.2) it follows that {x : F (x) > 0} ⊆ {x : G(x) > 0}. Thus, in (2.3) and (2.4) we can formally replace the expression {x : F (x) > 0} ∩ {x : G(x) > 0} by the simpler expression {x : F (x) > 0}. We say that X is smaller than Y in the upper orthant increasing ratio order (denoted by X uoir Y or F  uoir G) if F (y)G(x)F (x)G(y)

whenever xy.

(2.5)

This is equivalent to G(x) F (x)

is increasing in x ∈ {x : G(x) > 0},

(2.6)

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

49

where here, again, we use the convention a/0 ≡ ∞ whenever a > 0. Note that the above can be written equivalently as F (x + u) F (x)



G(x + u) G(x)

,

u0, x ∈ {x : F (x) > 0} ∩ {x : G(x) > 0}

(2.7)

and it is also equivalent to   [X − xX > x]  uo [Y − xY > x],

x ∈ {x : F (x) > 0} ∩ {x : G(x) > 0}.

(2.8)

Formally the expression {x : F (x) > 0} ∩ {x : G(x) > 0} in (2.7) and (2.8) can be replaced by the simpler expression {x : F (x) > 0}. Condition (2.8), when the univariate marginal distributions of X and of Y are the same, was studied in Proposition 7.2 of Denuit et al. [10]. Hu et al. [15] studied the order defined by requiring (2.6) to hold, but without the requirement that F and G belong to the same Fréchet class. They called it the weak hazard rate order (denoted by  whr ). An order that is more general than  whr is mentioned in Collet et al. [8]. Some results from Hu et al. [13] directly apply to the order  uoir , and will be used in the sequel. The two orders  lodr and  uoir are closely related, as is indicated in the next result. The proof of the next result is straightforward, and is therefore omitted. Theorem 2.1. Let X = (X1 , X2 , . . . , Xn ) and Y = (Y1 , Y2 , . . . , Yn ) be two random vectors in the same Fréchet class. (a) If X  lodr Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  uoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any decreasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  uoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly decreasing functions 1 , 2 , . . . , n , then X lodr Y. (b) If X uoir Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  lodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any decreasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  lodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly decreasing functions 1 , 2 , . . . , n , then X uoir Y. The next result is similar to Theorem 2.1, but it involves increasing, rather than decreasing, functions. Theorem 2.2. Let X = (X1 , X2 , . . . , Xn ) and Y = (Y1 , Y2 , . . . , Yn ) be two random vectors in the same Fréchet class. (a) If X  lodr Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  lodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any increasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ), . . . , n (Xn )) lodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly increasing functions 1 , 2 , . . . , n , then X  lodr Y. (b) If X uoir Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  uoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any increasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ),

50

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

. . . , n (Xn )) uoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly increasing functions 1 , 2 , . . . , n , then X uoir Y. Recall that a bivariate random vector (X1 , X2 ) is said to be smaller than a random vector (Y1 , Y2 ), that has the same univariate marginal distributions as (X1 , X2 ), in the PQD order (denoted as (X1 , X2 )  PQD (Y1 , Y2 )), if (X1 , X2 )  uo (Y1 , Y2 ). Joe [14] postulated a few desirable properties that a multivariate positive dependence order should satisfy. Here is a slight variation of the postulate list in Joe [14, p. 39]. P1. P2. P3. P4. P5. P6. P7. P8.

It should be a preorder (reflexive and transitive). It should be antisymmetric. It should imply the PQD order of every (corresponding) bivariate marginal distributions. It should be closed under marginalization. It should be closed under limits in distribution. It should be closed under permutations of the components. It should be closed under componentwise strictly increasing transformations. It should be maximal at the upper Fréchet bound, and in the bivariate case it should be minimal at the lower Fréchet bound.

The orders  lodr and  uoir satisfy most of these postulates. Postulates P1, P2, and P6 are easy to establish. For the order  uoir , Postulates P4 and P5 follow from results of Hu et al. [13]; this fact, together with Theorem 2.1, also imply that the order  lodr satisfies Postulates P4 and P5. Postulate P7 (for both orders) follows from Theorem 2.2. In order to see that P3 holds for the order  lodr , let y → ∞ in (2.1) to obtain X  lo Y. This implies the PQD ordering of every (corresponding) bivariate marginal distributions because X and Y have the same univariate marginal distributions. That is, for n-dimensional random vectors X and Y, we have X  lodr Y ⇒ (Xi , Xj )  PQD (Yi , Yj ), 1 i < j n.

(2.9)

In a similar manner it can be shown that X uoir Y ⇒ (Xi , Xj )  PQD (Yi , Yj ), 1 i < j n;

(2.10)

that is, the order  uoir also satisfies Postulate P3. Postulate P8 partly fails—the upper Fréchet bound does not always dominate every distribution, in the same Fréchet class, with respect to the order  lodr , or with respect to the order  uoir (this will be shown in Example 2.4 below). However, in the bivariate case, the lower Fréchet bound is dominated, with respect to the order  lodr , and with respect to the order  uoir , by any distribution in the same Fréchet class. This is shown in the next proposition. Recall that the Fréchet lower bound F − in the class 2 (F1 , F2 ) is defined by F − (x1 , x2 ) = max{F1 (x1 ) + F2 (x2 ) − 1, 0},

(x1 , x2 ) ∈ R2 .

A simple computation shows that the corresponding survival function F − is given by   F − (x1 , x2 ) = max F 1 (x1 ) + F 2 (x2 ) − 1, 0 , (x1 , x2 ) ∈ R2 , where F 1 ≡ 1 − F1 and F 2 ≡ 1 − F2 .

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

51

Proposition 2.3. If F ∈ 2 (F1 , F2 ) then F −  lodr F

and

F −  uoir F.

Proof. We will give the proof only for the order  uoir ; the proof for the order  lodr is similar. Fix (x1 , x2 ) (y1 , y2 ). We want to show that   max F 1 (y1 ) + F 2 (y2 ) − 1, 0 F (x1 , x2 )    max F 1 (x1 ) + F 2 (x2 ) − 1, 0 F (y1 , y2 ).

(2.11)

If F 1 (y1 ) + F 2 (y2 ) − 10 then (2.11) is trivially true. If F 1 (y1 ) + F 2 (y2 ) − 1 > 0 then (2.11) becomes    1 − F 1 (y1 ) − F 2 (y2 ) F (x1 , x2 )  1 − F 1 (x1 ) − F 2 (x2 ) F (y1 , y2 ).



(2.12)

Note that 1 − F 1 (z1 ) − F 2 (z2 ) = F (z1 , z2 ) − F (z1 , z2 ) for any (z1 , z2 ). Plugging this in (2.12) and simplifying, it is seen that (2.12) is equivalent to F (y1 , y2 )F (x1 , x2 ) F (x1 , x2 )F (y1 , y2 ), which is trivially true.



In the next example it is shown that if F ∈ 2 (F1 , F2 ) then it does not necessarily follow that F  lodr F + or that F  uoir F + , where F + is the upper Fréchet bound defined by F + (x1 , x2 ) = min{F1 (x1 ), F2 (x2 )}, (x1 , x2 ) ∈ R2 . A simple computation shows that   the corresponding survival function F + is given by F + (x1 , x2 ) = min F 1 (x1 ), F 2 (x2 ) , (x1 , x2 ) ∈ R2 . Example 2.4. Let (X1 , X2 ) and (Y1 , Y2 ) have distribution functions F and G. If (X1 , X2 ) and (Y1 , Y2 ) are bounded from below, and if for some (x1 , x2 ) (y1 , y2 ) we have F (y1 , y2 ) = G(y1 , y2 ) > 0 and F (x1 , x2 ) = G(x1 , x2 ), then (X1 , X2 ) and (Y1 , Y2 ) are not comparable with respect to the order  uoir . This follows from (2.6). Similarly, if (X1 , X2 ) and (Y1 , Y2 ) are bounded from above, and if for some (x1 , x2 )  (y1 , y2 ) we have F (y1 , y2 ) = G(y1 , y2 ) > 0 and F (x1 , x2 ) = G(x1 , x2 ), then (X1 , X2 ) and (Y1 , Y2 ) are not comparable with respect to the order  lodr . This follows from (2.2). In particular, let (X1 , X2 ), with distribution function F, take on the values (1, 1), (1, 2), (2, 1), (3, 3) with probabilities 1/5, 1/5, 1/5, and 2/5. Then F + (1, 1) = F (1, 1) and F + (2, 2) = F (2, 2), so F + and F are not comparable with respect to the order  uoir . Taking here G to be the distribution function of (−X1 , −X2 ), it can be verified that G+ and G are not comparable with respect to the order  lodr . We close this subsection with an example that shows that the orders  lodr and  uoir are not closed under convolutions.

52

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

Example 2.5. Let (X1 , X2 ), (Y1 , Y2 ), and (Z1 , Z2 ) be random vectors with probability mass functions

Suppose that (Z1 , Z2 ) is independent of (X1 , X2 ) and of (Y1 , Y2 ). It is easy to see that (X1 , X2 ) uoir (Y1 , Y2 ), and, obviously, (Z1 , Z2 )  uoir (Z1 , Z2 ). The probability mass functions of (U1 , U2 ) = (X1 , X2 ) + (Z1 , Z2 ) and (V1 , V2 ) = (Y1 , Y2 ) + (Z1 , Z2 ) are

and

and it is easily seen that (U1 , U2 )uoir (V1 , V2 ). From Theorem 2.1, with 1 (x) = 2 (x) = −x, it also follows that the order  lodr is not closed under convolutions. 2.2. Representation by copulae Let X = (X1 , X2 , . . . , Xn ) be a random vector with distribution function F. By Sklar [28] Theorem, given F ∈ n (F1 , F2 , . . . , Fn ), there exists a function CF : [0, 1]n → [0, 1] such that for all x ∈ Rn we have F (x1 , x2 , . . . , xn ) = CF (F1 (x1 ), F2 (x2 ), . . . , Fn (xn )).

(2.13)

If F is continuous then CF is unique and it can be constructed as follows: CF (u1 , u2 , . . . , un ) ≡ F (F1−1 (u1 ), F2−1 (u2 ), . . . , Fn−1 (un )), ui ∈ [0, 1], i = 1, 2, . . . , n, where Fi−1 is the generalized inverse of Fi . The function CF is called the copula associated with F; it is a distribution function with uniform[0, 1] marginals. Roughly speaking, a copula contains in it the dependence properties of the corresponding multivariate distribution function, independently of the marginal distributions. The following result shows that establishing the order  lodr or the order  uoir between two distribution functions, in the same Fréchet class, is equivalent to establishing this order between the corresponding copulae; this is a desirable property that any positive dependence order should satisfy. Theorem 2.6. Let X and Y have, respectively, distribution functions F and G in n (F1 , F2 , . . . , Fn ). Then X  lodr Y [X uoir Y] if, and only if, there exist copulae CF and CG (as in (2.13)) such that CF  lodr CG [CF  uoir CG ]. Proof. We give only the proof for the order  lodr ; the proof for the other order is similar.

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

53

Let Ran(Fi ) denote the range of Fi , i = 1, 2, . . . , n. If F and G are in n (F1 , F2 , . . . , Fn ) then their copulae are uniquely defined on ×ni=1 Ran(Fi ) ⊆ [0, 1]n . On this region we have CF (u1 , u2 , . . . , un ) = F (F1−1 (u1 ), F2−1 (u2 ), . . . , Fn−1 (un )).

(2.14)

For i = 1, 2, . . . , n the function Fi−1 is strictly increasing on Ran(Fi ). Therefore, by Theorem 2.2, CG /CF is decreasing on ×ni=1 Ran(Fi ). Now, the subcopulae CF and CG can be extended to the whole hypercube [0, 1]n by following the method illustrated in Schweizer and Sklar [25], namely, considering linear interpolations along each variable. Now assume that CF and CG are defined by (2.14) in the points xsi ≡ (x1 , . . . , xi−1 , si , xi+1 , . . . , xn ) and in xti ≡ (x1 , . . . , xi−1 , ti , xi+1 , . . . , xn ), with si < ti , but not in any point in between. By linear interpolation, we have that on the line segment between the above two points we will have to consider the ratio CG (xsi ) + (1 − )CG (xti ) . CF (xsi ) + (1 − )CF (xti ) It is a matter of calculus to show that the above ratio is increasing in , and therefore the ratio CG /CF is decreasing on the whole [0, 1]n . Conversely, suppose that there exist copulae CF and CG such that CF  lodr CG . Then from (2.2) we have that CG (u) CF (u)

is decreasing in u ∈ {u : CG (u) > 0}.

Substitute ui = Fi (xi ), i = 1, 2, . . . , n, in (2.15) to obtain (2.2).

(2.15) 

A useful corollary of Theorem 2.1, that will be used frequently in the sequel, is the following. Corollary 2.7. Let U and V be two random vectors whose distribution functions are copulae. Then U  lodr V ⇐⇒ 1 − U  uoir 1 − V, U  uoir V ⇐⇒ 1 − U  lodr 1 − V. If the distribution function of U is the copula C, then the distribution function of 1 − U is also a copula; the latter (at least in the bivariate case) is called the survival copula that corresponds to C (see [23, p. 28]). Of course, the result in Corollary 2.7 is valid even if the distribution functions of U and V are not copulae. 2.3. Examples Some examples of distributions that are ordered with respect to the orders  lodr and  uoir will now be given.

54

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

Example 2.8. Let F ⊥ and F + denote, respectively, the distribution functions corresponding to the independence case, and to the upper Fréchet bound, in the class  n (F1 , F2 , . . . , Fn ); that is, F ⊥ (x1 , x2 , . . . , xn ) = ni=1 Fi (xi ) and F + (x1 , x2 , . . . , xn ) = min1  i  n {F1 (x1 ), F2 (x2 ), . . . , Fn (xn )}. Then F ⊥  lodr F +

and

F ⊥  uoir F + .

(2.16)

In order to see it we may assume, by Theorem 2.6, that Fi , i = 1, 2, . . . , n, are all uniform[0, 1] distribution functions. First we show that F ⊥  lodr F + . That is, we need to verify, for x y ∈ [0, 1]n , that  n

 n

xi min {y1 , y2 , . . . , yn } yi min {x1 , x2 , . . . , xn }  i=1

1i n

i=1

1i n

and this is straightforward. The inequality above is reversed if we replace xi by 1 − xi and yi by 1 − yi , but still require x y ∈ [0, 1]n . This shows that F ⊥  uoir F + . A stronger result than (2.16) is given in Example 3.7 below. Example 2.9 (Marshall–Olkin). Let F and G be two n-variate Marshall–Olkin [19] survival functions given, for x 0, by ⎧ n ⎨  i xi − F (x) = exp − ⎩ i=1



i1 i2 (xi1 ∨ xi2 )

1  i1
⎫ ⎬

− · · · − 12···n (x1 ∨ x2 ∨ · · · ∨ xn ) , ⎭ ⎧ n ⎨   i xi − i1 i2 (xi1 ∨ xi2 ) G(x) = exp − ⎩ i=1 1  i1
A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

55

where A = A − A , A ⊆ {1, 2, . . . , n}. For i ∈ {1, 2, . . . , n} we have ⎧ ⎪ ⎪ ⎨   * G(x) (i)(j1 )(j2 ) I(xi > (xj1 ∨ xj2 )) = − i + log (i)(j ) I(xi > xj ) + ⎪ *xi F (x) ⎪ j1 (x1 ∨ · · · ∨ xi−1 ∨ xi+1 ∨ · · · ∨ xn )) , ⎪ ⎪ ⎭ where (i)(j ) denotes ij or j i according to whether i < j or j < i, respectively, and (i)(j1 )(j2 ) is similarly defined. Hence G(x)/F (x) is increasing if, and only if,

i1 + i1 i2

i  0, i1 + i1 i2  0, + i1 i3 + i1 i2 i3  0, .. .  A  0,

i ∈ {1, 2, . . . , n}, {i1 , i2 } ⊆ {1, 2, . . . , n}, {i1 , i2 , i3 } ⊆ {1, 2, . . . , n}, i ∈ {1, 2, . . . , n}.

(2.17)

Ai A⊆{1,2,...,n}

This is a stronger result than the one in Example 5.2 of Hu et al. [13], who only noticed that the nonpositivity of the ’s implies the monotonicity of G(x)/F (x). The ith univariate marginal survival function of F is given by ⎧ ⎛ ⎞ ⎫ ⎪ ⎪ ⎪ ⎪ ⎨ ⎜   ⎟ ⎬ ⎟ ⎜ F i (x) = exp − ⎝i + (i)(j ) + (i)(j1 )(j2 ) + · · · + 12···n ⎠ x ⎪ ⎪ ⎪ ⎪ j1
where (i)(j ) denotes ij or j i according to whether i < j or j < i, respectively, and (i)(j1 )(j2 ) is similarly defined. Hence F and G are in the same Fréchet class if, and only if, 

A = 0,

i ∈ {1, 2, . . . , n}.

Ai A⊆{1,2,...,n}

Thus, F  uoir G if, and only if, the set of inequalities (2.17) holds, with the above equality replacing the last inequality in (2.17). In particular, in the bivariate case, F  uoir G if, and only if, 1 0, 2 0 and 1 + 12 = 2 + 12 = 0; that is, 1 = 2 = −12 0. For example, this is the case when 1 = 2, 2 = 3, 12 = 4, 1 = 1, 2 = 2, and 12 = 5.

56

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

Example 2.10 (Farlie–Gumbel–Morgenstern). Let F1 , F2 , . . . , Fn be n univariate distributions. The general form of an n-dimensional Farlie–Gumbel–Morgenstern distribution in n (F1 , F2 , . . . , Fn ) is given by  n

⎡  F (x1 , x2 , . . . , xn ) = Fi (xi ) ⎣1 + ai1 i2 F i1 (xi1 )F i2 (xi2 ) i=1

1  i1


+

ai1 i2 i3 F i1 (xi1 )F i2 (xi2 )F i3 (xi3 )

1  i1


+ · · · + a12···n F 1 (x1 )F 2 (x2 ) · · · F n (xn )⎦ , (x1 , x2 , . . . , xn ) ∈ Rn ,    where the a’s (whose number is ni=2 ni = 2n − n − 1) are suitable parameters which satisfy the constraints  1+ εi1 εi2 ai1 i2 1  i1


+

εi1 εi2 εi3 ai1 i2 i3 + · · · + ε1 ε2 · · · εn a12···n 0,

1  i1
with εi = ±1, i = 1, 2, . . . , n; see, for example, Kotz et al. [18]. By Theorem 2.6, for the purpose of comparing Farlie–Gumbel–Morgenstern distributions with respect to the orders  lodr and  uoir , we may restrict attention to the case when all the Fi ’s are uniform[0, 1] distribution functions. Then the Farlie–Gumbel–Morgenstern distribution above simplifies to F (x1 , x2 , . . . , xn )  n ⎡ = xi ⎣1 + i=1



+



ai1 i2 (1 − xi1 )(1 − xi2 )

1  i1
ai1 i2 i3 (1 − xi1 )(1 − xi2 )(1 − xi3 )

1  i1


+ · · · + a12···n (1 − x1 )(1 − x2 ) · · · (1 − xn )⎦ ,

(x1 , x2 , . . . , xn ) ∈ [0, 1]n .

The corresponding survival function is given by F (x1 , x2 , . . . , xn )  n

⎡ = (1 − xi ) ⎣1 +

ai1 i2 xi1 xi2

1  i1
i=1









ai1 i2 i3 xi1 xi2 xi3 + · · · + (−1) a12···n x1 x2 · · · xn ⎦ ,

1  i1
(x1 , x2 , . . . , xn ) ∈ [0, 1]n .

n

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

57

In the sequel we identify some instances in which Farlie–Gumbel–Morgenstern distributions are ordered with respect to the orders  lodr and  uoir . Example 2.10(a). In Example 2.10, let a12···n = , and let all the other a’s be 0, where ||1. That is, let  n 

n 1 +  (1 − xi ) , xi F() (x1 , x2 , . . . , xn ) = F () (x1 , x2 , . . . , xn ) =

i=1 n

 i=1 (1 − xi )

1 + (−1)  n

n

i=1

xi ,

i=1

(x1 , x2 , . . . , xn ) ∈ [0, 1]n . Now, for 1 and 2 such that |i |1, i = 1, 2, and x y, the inequality F(2 ) (x) F(2 ) (y)  F(1 ) (x) F(1 ) (y) reduces to (2 − 1 )

 n

(1 − xi ) −

i=1

n

(1 − yi ) 0.

i=1

So F(1 )  lodr F(2 ) if, and only if, −11 2 1. On the other hand, the inequality F (2 ) (x) F (1 ) (x)



F (2 ) (y) F (1 ) (y)

reduces to [(−1)n 2 − (−1)n 1 ]

 n i=1

yi −

n

xi 0.

i=1

So F(1 )  uoir F(2 ) if, and only if, −1(−1)n 1 (−1)n 2 1, that is, −1 1 2 1 when n is even, and −1 2 1 1 when n is odd. For n = 2, both inequalities above strengthen a result that is stated in Exercise 3.22 in Nelsen [23, p. 77]. Example 2.10(b). In Example 2.10 let all the a’s be equal to , where ||  k1 and k = 2n − n − 1. That is, let ⎡ ⎡ n  F () (x1 , x2 , . . . , xn ) = (1 − xi ) ⎣1 +  ⎣ xi1 xi2 1  i1
i=1





⎤⎤

xi1 xi2 xi3 + · · · + (−1)n x1 x2 · · · xn ⎦⎦ ,

1  i1
(x1 , x2 , . . . , xn ) ∈ [0, 1]n .

58

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

Noting that n

(1 − xi ) = 1 −

i=1

n 



xi +





xi1 xi2

1  i1
i=1

xi1 xi2 xi3 + · · · + (−1)n

1  i1
n

xi ,

i=1

it follows that F () (x1 , x2 , . . . , xn )   n

n n  (1 − xi ) 1 +  (1 − xi ) + xi − 1 , = i=1

i=1

(x1 , x2 , . . . , xn ) ∈ [0, 1]n .

i=1

Proceeding as in the previous example, it is seen, for x y, that the inequality F (2 ) (y) F (1 ) (y)

F (2 ) (x) F (1 ) (x)



reduces to

 n

 n n n   (1 − yi ) + yi − 1 − (1 − xi ) + xi − 1 (2 − 1 ) i=1

i=1

i=1

Now we will show that i=1 (1 − xi ) + xj yj , and fix the other xi ’s, i = j . Then n

(1 − xi ) +

i=1

n 

xi − 1 =

i=1





xi +

i=1 xi



i =j

i =j





xi +

i =j

= (1 − yj )

i=1

(2.18)

n

n

− 1 is increasing in xj for each j. Let ⎡

(1 − xi ) + xj ⎣1 − (1 − xi ) + yj ⎣1 −

(1 − xi ) + yj +

i =j



⎤ (1 − xi )⎦ − 1

i =j



i =j



 0.





⎤ (1 − xi )⎦ − 1

i =j

xi − 1

i =j

and the above claim follows. Hence, from (2.18) it follows that F(1 )  uoir F(2 ) if, and only if, 1 2 . It is also true in this example that F(1 )  lodr F(2 ) if, and only if, 1 2 . To see it, write  n

F() (x1 , x2 , . . . , xn ) = xi [1 + g(x1 , x2 , . . . , xn )], i=1

where g(x) =

 1  i1
(1 − xi1 )(1 − xi2 ) +

 1  i1
+ · · · + (1 − x1 )(1 − x2 ) · · · (1 − xn ).

(1 − xi1 )(1 − xi2 )(1 − xi3 )

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

59

It is easy to verify that F(1 )  lodr F(2 ) if, and only if, (2 − 1 )g(x) (2 − 1 )g(y),

xy.

And since g(x) is decreasing in x, it follows that the above condition holds if, and only if, 1 2 . An observation that is useful for the identification of the orders  lodr and  uoir in parametric families of multivariate distributions is the following. Let {F } be a parametric family of n-dimensional distributions, all in the same Fréchet class, where the parameter space is a subset of R. Then F  lodr F for all   if, and only if, F (x1 , x2 , . . . , xn ) F (x1 , x2 , . . . , xn )

is decreasing in x1 , x2 , . . . , xn when  ;

that is (if the partial derivatives below exists), if, and only if, * log F (x1 , x2 , . . . , xn ) is decreasing in  for i = 1, 2, . . . , n. *xi

(2.19)

Similarly, F  uoir F if, and only if, * log F  (x1 , x2 , . . . , xn ) is increasing in  for i = 1, 2, . . . , n *xi (this really means that F  (x1 , x2 , . . . , xn ) is TP2 (totally positive of order 2) in (, xi ) for i = 1, 2, . . . , n). Example 2.11 (Ali–Mikhail–Haq). Consider the family of bivariate copulae C defined by C (u, v) =

uv , 1 − (1 − u)(1 − v)

(u, v) ∈ (0, 1)2 ,

where ||1 (see [23]). Denote the corresponding survival copulae (see a note after Corollary 2.7) by D . Their survival functions are given by D  (u, v) = C (1 − u, 1 − v) =

(1 − u)(1 − v) , 1 − uv

(u, v) ∈ (0, 1)2 .

We will show that C  lodr C whenever −1 1. In order to see it we compute 1 − (1 − v) * log C (u, v) = u(1 − (1 − u)(1 − v)) *u and this is decreasing in  ∈ [0, 1]. Similarly, **v log C (u, v) is decreasing in  ∈ [0, 1]. The claim thus follows from (2.19). The inequality C  lodr C , −11, strengthens the result in Exercise 2.31 in Nelsen [23, p. 35]. Using Corollary 2.7 we also see that D  uoir D whenever −1  1.

60

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

Example 2.12 (Gumbel). Consider the family of bivariate copulae C defined by C (u, v) = uv exp{− log u log v},

(u, v) ∈ (0, 1)2

and the corresponding survival copulae D with survival functions D  (u, v) = (1 − u)(1 − v) exp{− log(1 − u) log(1 − v)},

(u, v) ∈ (0, 1)2 ,

where  ∈ (0, 1] (again, see [23]). We will show that C  lodr C whenever  . To see it we compute 1 −  log v * log C (u, v) = u *u and this is increasing in . Similarly, **v log D  (u, v) is increasing in . The claim thus follows from (2.19). The inequality C  lodr C , , strengthens the result in Example 4.9 in Nelsen [23, p. 108]). Using Corollary 2.7 we also see that D  uoir D whenever  . Example 2.13 (Cuadras–Augé). Consider the family of bivariate copulae C defined by C (u, v) = [min{u, v}] (uv)1− ,

(u, v) ∈ (0, 1)2 ,

where 0  1 (see [23, p. 12]). Let  . Then ! − C (u, v) , 0 u v 1, v =  − , 0 v u 1. u C (u, v) This is decreasing in u and v, and therefore C  lodr C . Let us recall the definition of Archimedean copulae. Let  be a continuous, strictly decreasing function from [0, 1] to [0, ∞] such that (1) = 0. The pseudo-inverse of  is the function [−1] given by ! −1  (t), 0 t (0), [−1]  (t) = 0, (0)t ∞. If  above is also convex, then the function C, defined by C(u, v) = [−1] ((u) + (v)),

(u, v) ∈ [0, 1]2 ,

is a copula, and it is called an Archimedean copula with generator . Nelsen [23, Section 4.4] proved that two Archimedean copulae C1 and C2 , with generators 1 and 2 , respectively, satisfy C1  PQD C2 if, and only if, the corresponding generators is subadditive. It is well known that concavity implies subadditivity, satisfy that 1 [−1] 2 and therefore concavity implies that C1  PQD C2 . Let D1 and D2 be the survival copulae associated with C1 and C2 , respectively; that is, the copulae with survival functions D i (u, v) = Ci (1 − u, 1 − v),

(u, v) ∈ (0, 1)2 , i = 1, 2.

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

61

Note that C1  PQD C2 ⇐⇒ D1  PQD D2 . implies that D1  PQD D2 . In light Therefore subadditivity (and hence concavity) of 1 [−1] 2 of (2.9) and (2.10), one may wonder whether concavity, or even subadditivity, imply any of the stronger relations C1  lodr C2 , C1  uoir C2 , D1  lodr D2 , or D1  uoir D2 . The answer to this question is negative. In order to see that, it suffices to obtain an example in which 1 [−1] is concave, but such that the underlying copulae and survival copulae are not 2 ordered with respect to the orders  lodr and  uoir . Example 2.14. Consider the generator  given by  (u) = (− log u) ,

u ∈ (0, 1],

with 1. Then −1 1/ }, [−1]  (t) =  (t) = exp{−t

t 0.

For 1 2 we have 1 /2 , 1 −1 2 (t) = t

t 0.

Obviously 1 −1 2 is concave. The copulae that are associated with the generators 1 and 2 are given by   1/i  Ci (u, v) = exp − (− log u)i + (− log v)i , (u, v) ∈ [0, 1]2 , i = 1, 2. In order for C1  lodr C2 to hold, it is necessary that C2 (u, v) C1 (u, v)

be decreasing in (u, v) ∈ [0, 1]2 .

(2.20)

Let 1 = 2 and 2 = 3. Then (2.20) is equivalent to the requirement that 1/3    exp − (− log u)3 + (− log v)3 gv (u) ≡ 1/2  is increasing in u ∈ [0, 1]   exp − (− log u)2 + (− log v)2 for every v ∈ [0, 1]. But this fails, for example, for v = 0.8, since g0.8 (0.6) ≈ 1.033 and g0.8 (0.7) ≈ 1.037. From Corollary 2.7 it follows that also the corresponding survival copulae do not satisfy D1  uoir D2 . In order for C1  uoir C2 to hold, it is necessary that C 2 (u, v) C 1 (u, v)

be increasing in (u, v) ∈ [0, 1]2 .

This is equivalent to 1/2    1 − u − v + exp − (− log u)2 + (− log v)2 hv (u) ≡ 1/1    1 − u − v + exp − (− log u)1 + (− log v)1

62

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

is increasing in u ∈ [0, 1] for all v ∈ [0, 1]. Taking 1 = 2, 2 = 3 and v = 0.8 we see that h0.8 (0.8) ≈ 1.197 and h0.8 (0.9) ≈ 1.155, so C1 uoir C2 . From Corollary 2.7 it follows that also the corresponding survival copulae do not satisfy D1  lodr D2 . 2.4. Relationships to other orders In (2.9) and (2.10) it is shown that, in the bivariate case, the positive dependence orders  lodr and  uoir are stronger than the order  PQD . A practical generalization of the order  PQD , when the compared random vectors are of dimension n > 2, is the supermodular order  sm . We will not reproduce the definition of this order here; the reader may find it in Szekli et al. [29] or in Shaked and Shanthikumar [27]. We will note, however, that " # X  sm Y ⇒ P {X x} P {Y x} for all x . (2.21) In the next example it is shown that, in general, X  uoir Y does not imply that P {X x} P {Yx} for all x, and therefore, when the compared random vectors are of dimension n 3, we have X uoir Y ⇒ X  sm Y. Example 2.15. Let (Y1 , Y2 , Y3 ) be a random vector such that P {Y1 = 0, Y2 = 0, Y3 = 1} = 0.2, P {Y1 = 1, Y2 = 1, Y3 = 1} = 0.5, and P {Y1 = 0, Y2 = 1, Y3 = 0} = P {Y1 = 1, Y2 = 0, Y3 = 0} = P {Y1 = 1, Y2 = 1, Y3 = 0} = 0.1. Note that Y1 , Y2 , Y3 are identically distributed Bernoulli random variables with parameter 0.7. Let X1 , X2 , X3 be independent and identically distributed Bernoulli random variables with parameter 0.7. A straightforward calculation shows that X  uoir Y. However, P {Y1 0, Y2 0, Y3 0} = 0 < (0.3)2 = P {X1 0, X2 0, X3 0}, and thus the right-hand side of (2.21) does not hold. In a similar manner it can also be shown that X lodr Y ⇒ X  sm Y. Another positive dependence order, which is used to compare bivariate random vectors with the same marginal distributions, and which is then stronger than the order  PQD , is the TP2 order which was introduced and studied in Kimeldorf and Sampson [16], and was further discussed in Metry and Sampson [20]. When the compared random vectors (X1 , X2 ) and (Y1 , Y2 ), with distributions F and G in 2 (F1 , F2 ), have, with respect to some dominating product measure, densities f and g, respectively, then we denote (X1 , X2 )  TP2 (Y1 , Y2 ) (or F  TP2 G) if the following condition holds: f (x1 , y1 )f (x2 , y2 )g(x1 , y2 )g(x2 , y1 ) f (x1 , y2 )f (x2 , y1 )g(x1 , y1 )g(x2 , y2 ), whenever x1 x2 and y1 y2 . (2.22) It turns out that the order  TP2 neither implies, nor is implied by, the orders  lodr and  uoir . To see it, note that if F ∈ 2 (F1 , F2 ) then F  TP2 F + (see [16]). Thus, in light of Example 2.4 we see that (X1 , X2 )  TP2 (Y1 , Y2 ) ⇒ (X1 , X2 )  lodr (Y1 , Y2 ), (X1 , X2 )  TP2 (Y1 , Y2 ) ⇒ (X1 , X2 )  uoir (Y1 , Y2 ).

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

63

Later, in Example 3.8 below, it will be shown that (X1 , X2 )  lodr (Y1 , Y2 ) ⇒ (X1 , X2 )  TP2 (Y1 , Y2 ), (X1 , X2 )  uoir (Y1 , Y2 ) ⇒ (X1 , X2 )  TP2 (Y1 , Y2 ). Averous and Dortet-Bernadet [1] introduced and studied two interesting bivariate positive dependence orders that we now define. For any random vector (X1 , X2 ), with distribution function F, we define the conditional distribution functions FxL and FxR by   (2.23) FxL (y) = P {X2 y X1 x}, FxR (y) = P {X2 y X1 > x} for x’s for which these conditional distribution functions are defined. If (X1 , X2 ) and (Y1 , Y2 ), with distribution functions F and G in 2 (F1 , F2 ), satisfy L −1 L L −1 GL x  (Gx ) (u)Fx  (Fx ) (u),

u ∈ [0, 1],

(2.24)

whenever x x  , then (X1 , X2 ) is said to be smaller than (Y1 , Y2 ) in the left tail decreasing order (LTD; see Section 2.5 below for more about this concept) and we denote this by (X1 , X2 ) LTD (Y1 , Y2 ) or F  LTD G. If F and G satisfy R R −1 R −1 GR x  (Gx ) (u)Fx  (Fx ) (u),

u ∈ [0, 1],

whenever x x  , then (X1 , X2 ) is said to be smaller than (Y1 , Y2 ) in the right tail increasing order (RTI; again, see Section 2.5 below for more about this concept) and we denote this by (X1 , X2 ) RTI (Y1 , Y2 ) or F  RTI G. If C1 and C2 are two bivariate Archimedean copulae with respective generators 1 and 2 , then Averous and Dortet-Bernadet [1] showed that C1  LTD C2 if, and only if, 1 [−1] 2 is concave. It follows from Example 2.14 that (X1 , X2 ) LTD (Y1 , Y2 ) ⇒ (X1 , X2 )  lodr (Y1 , Y2 ).

(2.25)

In Example 2.4 we saw that, in general, F and F + need not be comparable with respect to the order  uoir . On the other hand, Averous and Dortet-Bernadet [1] showed that in the bivariate case we have F  RTI F + , provided some regularity conditions hold. It follows that (X1 , X2 ) RTI (Y1 , Y2 ) ⇒ (X1 , X2 )  uoir (Y1 , Y2 ).

(2.26)

Another way to obtain implication (2.26) is to note that (X1 , X2 ) RTI (Y1 , Y2 ) ⇐⇒ (−X1 , −X2 )  LTD (−Y1 , −Y2 ).

(2.27)

Using (2.27), it is easy to see that (2.26) follows from (2.25) and Theorem 2.1. In the next example we show that (X1 , X2 ) lodr (Y1 , Y2 ) ⇒ (X1 , X2 )  LTD (Y1 , Y2 ).

(2.28)

This is in contrast to the result in Theorem 3.9 below. Example 2.16. Consider two random vectors (X1 , X2 ) and (Y1 , Y2 ) with distribution functions in 2 (F1 , F2 ), where X1 and Y1 are discrete random variables taking on the values 0, 1,

64

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

and 2, with respective probabilities 1/8, 3/40, and 4/5, while F2 is uniform[0, 2]. Define the distribution functions F and G of (X1 , X2 ) and (Y1 , Y2 ) through the conditional distribution functions of X2 given X1 , and of Y2 given Y1 , as follows: ⎧ 2 ⎪ y, y ∈ [0, .6), ⎪ ⎨3  3 1 P {X2 y X1 = 0} = 2 y − 2 , y ∈ [.6, 1), ⎪ ⎪ ⎩ 1, y 1, ⎧ 3 ⎪ y, y ∈ [0, .4), ⎪ ⎨2  2 1  P {Y2 y Y1 = 0} = 3 y + 3 , y ∈ [.4, 1), ⎪ ⎪ ⎩ 1, y 1, ⎧ 10 ⎪ y, y ∈ [0, .6), ⎪ ⎨ 9  5 1 P {X2 y X1 = 1} = 6 y + 6 , y ∈ [.6, 1), ⎪ ⎪ ⎩ 1, y 1, ⎧ 3 ⎪ y, y ∈ [0, .4), ⎪ ⎨2  P {Y2 y Y1 = 1} = 2y − 15 , y ∈ [.4, .6), ⎪ ⎪ ⎩ 1, y .6, ⎧ 5 ⎪ y ∈ [0, .6), ⎪ 12 y, ⎪ ⎪ ⎪ 1 5 ⎨  16 y + 16 , y ∈ [.6, 1), P {X2 y X1 = 2} = ⎪ ⎪ 58 y − 41 , y ∈ [1, 2), ⎪ ⎪ ⎪ ⎩ 1, y 2, ⎧ 1 ⎪ y ∈ [0, .4), ⎪ 4 y, ⎪ ⎪ ⎪ 1 1 ⎪ ⎪ y − 30 , y ∈ [.4, .6), ⎪ ⎨3  25 7 P {Y2 y Y1 = 2} = 48 y − 48 , y ∈ [.6, 1), ⎪ ⎪ ⎪ 5 1 ⎪ ⎪ y ∈ [1, 2), ⎪ 8y − 4, ⎪ ⎪ ⎩ 1, y 2.    Note that F0L (y) = P {X2 y X1 = 0} and GL 0 (y) = P {Y2 y Y1 = 0}, given above. A straightforward computation gives ⎧ 5 ⎪ y, y ∈ [0, .6), ⎪ ⎨6 L 5 1 F1 (y) = 4 y − 4 , y ∈ [.6, 1), ⎪ ⎪ ⎩ 1, y 1,

⎧ 3 ⎪ ⎪ 2 y, ⎪ ⎪ ⎪ ⎨ 7y + 2 , 6 15 GL (y) = 1 ⎪ 5y+ 7, ⎪ ⎪ 12 12 ⎪ ⎪ ⎩ 1,

y ∈ [0, .4), y ∈ [.4, .6), y ∈ [.6, 1), y 1.

Finally we note that F2L and GL 2 are uniform[0, 2] distribution functions.

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

65

7 L −1 For u = .8, x = 0, and x  = 1 we have (F0L )−1 (u) = 13 15 and (G0 ) (u) = 10 , so that 7 L −1 F1L (F0L )−1 (u) = 56 and GL 1 (G0 ) (u) = 8 . Thus (2.24) fails; that is, F LTD G. On the other hand, we will now show that h(x, y) = G(x, y)/F (x, y) is decreasing in (x, y) whenever G(x, y) > 0. Indeed, ⎧ GL (y) ⎪ g0 (y) = L0 if (x, y) ∈ [0, 1) × (0, 2], ⎪ ⎪ F0 (y) ⎪ ⎨ h(x, y) = g (y) = GL1 (y) if (x, y) ∈ [1, 2) × (0, 2], 1 ⎪ ⎪ F1L (y) ⎪ ⎪ ⎩ 1 if x 2 or y > 2,

where

g0 (y) =

⎧9 ⎪ 4, ⎪ ⎪ ⎪ ⎪ ⎨1+ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩

2 3

1

·

y ∈ [0, .4), 1 2y ,

2y+1 3y−1 ,

y ∈ [.4, .6), y ∈ [.6, 1),

and

g1 (y) =

y 1

⎧9 ⎪ 5, ⎪ ⎪ ⎪ ⎪ ⎨7+ 5 ⎪ ⎪ ⎪ ⎪ ⎪ ⎩

y ∈ [0, .4), 4 25y ,

y ∈ [.4, .6),

5y+7 15y−3 ,

y ∈ [.6, 1),

1,

y 1.

Both g0 and g1 are decreasing in y. Hence, for a fixed x, we have that h(x, y) is decreasing in y. It is also easy to show that g0 (y)g1 (y) for all y, so that F  lodr G. Using (2.27), it is seen from (2.28) and Theorem 2.1 that (X1 , X2 ) uoir (Y1 , Y2 ) ⇒ (X1 , X2 )  RTI (Y1 , Y2 ).

(2.29)

Hollander et al. [12] introduced two other LTD and RTI orders whose definition we now recall. Take the notation FxL and FxR from (2.23). If (X1 , X2 ) and (Y1 , Y2 ), with distribution functions F and G in 2 (F1 , F2 ), satisfy L FxL (y) − FxL (y)GL x (y) − Gx  (y),

y ∈ R,

(2.30)

whenever x x  , then (X1 , X2 ) is said to be smaller than (Y1 , Y2 ) in the LTD order à la Hollander et al. [12], and we denote this by (X1 , X2 )  ∗LTD (Y1 , Y2 ) or F  ∗LTD G. If F and G satisfy R FxR (y) − FxR (y)GR x (y) − Gx  (y),

y ∈ R,

whenever x x  , then (X1 , X2 ) is said to be smaller than (Y1 , Y2 ) in the RTI order à la Hollander et al. [12] and we denote this by (X1 , X2 )  ∗RTI (Y1 , Y2 ) or F  ∗RTI G. It is of interest to mention here that the orders  LTD and  ∗LTD do not imply each other. Also  RTI and  ∗RTI do not imply each other; see Colangelo [6]. In the next example we show that the order  ∗LTD does not always have the Fréchet lower bound as a minimal element in the given Fréchet class, and therefore, by Proposition 2.3, (X1 , X2 ) lodr (Y1 , Y2 ) ⇒ (X1 , X2 )  ∗LTD (Y1 , Y2 ). This is in contrast to the result in Theorem 3.10 below.

(2.31)

66

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

Example 2.17. Let (X1 , X2 ) be a random vector with probability mass function

Denote the distribution function of (X1 , X2 ) by G, and denote the corresponding Fréchet lower bound by F. Let y = 2, x = 1, and x  = 2. Then FxL (y) = FxL (y) = 0, GL x (y) = 1/2, ∗ and GL (y) = 2/3. So (2.30) fails, and therefore F  G. LTD x It is not hard to verify that (X1 , X2 )  ∗RTI (Y1 , Y2 ) if, and only if, (−X1 , −X2 )  ∗LTD (−Y1 , −Y2 ). Thus, it follows from (2.31) and Theorem 2.1 that (X1 , X2 ) uoir (Y1 , Y2 ) ⇒ (X1 , X2 )  ∗RTI (Y1 , Y2 ).

(2.32)

In the next example we show that (X1 , X2 ) ∗LTD (Y1 , Y2 ) ⇒ (X1 , X2 )  lodr (Y1 , Y2 ).

(2.33)

Example 2.18. Let (X1 , X2 ) and (Y1 , Y2 ) be random vectors in the same Fréchet class with probability mass functions

Denote the distribution function of (X1 , X2 ) by F, and the one of (Y1 , Y2 ) by G. Since G(1,1) G(1,2) 5 F (1,1) = 2 < 2 = F (1,2) , it follows that F lodr G. Computing FxL and GL x we obtain L L y F1L (y) F2L (y) F3L (y) GL 1 (y) G2 (y) G3 (y)

1 2 3

1/6 1/3 1

1/6 2/3 1

1/8 5/8 1

1/3 5/6 1

1/6 2/3 1

1/8 5/8 1

It is not difficult to check that (2.30) holds; that is, F  ∗LTD G. Again, using the fact that (X1 , X2 )  ∗RTI (Y1 , Y2 ) if, and only if, (−X1 , −X2 )  ∗LTD (−Y1 , −Y2 ), it follows from (2.33) and Theorem 2.1 that (X1 , X2 ) ∗RTI (Y1 , Y2 ) ⇒ (X1 , X2 )  uoir (Y1 , Y2 ).

(2.34)

Capéraà et al. [5] introduced and studied an interesting bivariate positive dependence order. We will not reproduce its definition here, but let us denote it by  CFG . If C1 and C2 are two bivariate Archimedean copulae with respective generators 1 and 2 , then Capéraà

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

67

is antistarshaped (that is, 1 /2 et al. [5] showed that C1  CFG C2 if, and only if, 1 [−1] 2 is increasing). Since concavity implies antistarshapedness, it follows from Example 2.14 that in the bivariate case (X1 , X2 ) CFG (Y1 , Y2 ) ⇒ (X1 , X2 )  lodr (Y1 , Y2 ).

(2.35)

In fact, Capéraà et al. [5] showed that (X1 , X2 )  CFG (Y1 , Y2 ) ⇒ (X1 , X2 )  PQD (Y1 , Y2 ). Thus (2.35) also follows from (2.9). In a similar manner we get from (2.10) that (X1 , X2 ) CFG (Y1 , Y2 ) ⇒ (X1 , X2 )  uoir (Y1 , Y2 ). 2.5. Applications A stochastic order is useful in applications if it holds in many examples, and if it implies useful inequalities. We will now indicate that the orthant ratio orders indeed have these properties. Indeed, a host of examples of random vectors that are ordered with respect to the orthant ratio orders are given in Section 2.3. On the other hand, the implications (2.9) and (2.10) yield a host of useful inequalities that follow from the order  PQD that holds between any two corresponding bivariate marginals. For example, in the bivariate setting, the order  PQD implies that the corresponding Pearson’s correlations, Kendall’s ’s, Spearman’s ’s, and Blomqvist q’s, are ordered in the same direction (see, [17]). Therefore, by (2.9) and (2.10), the orders  lodr and  uoir have the same useful implication for any two corresponding bivariate marginals. The order  uoir implies the multivariate weak hazard rate order of Hu et al. [13]. Thus, every application in that paper, in which the compared random vectors have the same marginals, follows if the compared random vectors are ordered with respect to  uoir . For example, if X = (X1 , X2 , . . . , Xn ) is a vector of random lifetimes, and if Y = (Y1 , Y2 , . . . , Yn ) is another vector of random lifetimes, such that X  uoir Y, then a series system with component lifetimes X1 , X2 , . . . , Xn has stochastically shorter lifetime, in the sense of the univariate hazard rate stochastic order, than a series system with component lifetimes Y1 , Y2 , . . . , Yn . In a similar manner it can be shown that if X  lodr Y, then a parallel system with component lifetimes X1 , X2 , . . . , Xn has stochastically longer lifetime, in the sense of the univariate reversed hazard rate stochastic order, than a parallel system with component lifetimes Y1 , Y2 , . . . , Yn . In fact, the above comparisons hold even if the component lifetimes are scaled in order to model improvements (see [13]). The orthant ratio orders are also useful as tools to model positive dependence notions. In general, let  be some positive dependence order, and let F be some distribution function in n (F1 , F2 , . . . , Fn ). A common idea for defining F as a “positive dependent” distribution is by requiring it to satisfy F F ⊥ , where F ⊥ is the member in n (F1 , F2 , . . . , Fn ) that corresponds to the independence case. For example, in the bivariate case, F  PQD F ⊥ means that F is ‘positive quadrant dependent.’ Furthermore, in the bivariate case, if F has a density f, then F  TP2 F ⊥ means that f is totally positive of order 2 (TP2 ). Below we make some observations on the meaning of F  lodr F ⊥ and F  uoir F ⊥ . We consider only the bivariate case, and by Theorem 2.6 it suffices to consider only the case when F is a copula (to emphasize this we will denote below F by C).

68

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

First we recall (see, for example, [3, p. 142]) that a random variable X is said to be left  tail decreasing in Y (denoted as LTD(X Y )) if P {X x Y y} is decreasing in y for  all x. Y )) if P {X > x Y > y} Also, X is said to be right tail increasing in Y (denoted as RTI(X  is increasing in y for all x. Both the LTD(X Y ) and RTI(X Y ) concepts indicate positive dependence between X and Y. Both concepts are invariant under increasing transformations of the individual underlying random variables. Proposition 2.19. Let (U, V ) be distributed according to the copula C. Then   (a) C  lodr C ⊥ if, and only if, LTD(UV ) and LTD(V U ). (b) C  uoir C ⊥ if, and only if, RTI(U V ) and RTI(V U ). Proof. Write C  lodr C ⊥ as C(u, v) uv

is decreasing in (u, v) ∈ (0, 1]2 ,

(2.36)

since (0, 1]2 = {(u, v) : C ⊥ (u, v) > 0} = {(u, v) : C(u, v) > 0} (see a comment after (2.4)). For a fixed v ∈ (0, 1] that means  P {V v U u} is decreasing in u ∈ (0, 1] and for a fixed u ∈ (0, 1] that means  P {U uV v} is decreasing in v ∈ (0, 1].

  These two conditions together are equivalent to LTD(V U ) and LTD(U V ). The proof of (b) is similar. 

Some extensions of the implications ⇐ in Proposition 2.19, when the compared random vectors are of dimension n > 2, can be found in Colangelo [6] and in Colangelo et al. [7]. These extensions involve the notions of LTDS (left tail decreasing in sequence), RTIS (right tail increasing in sequence), and CIS (conditionally increasing in sequence); the definitions of these positive dependence notions can be found in Block and Ting [4] and in Ebrahimi and Ghosh [11]. From Proposition 2.19 we obtain a nice characterization of Archimedean copulae that are LTD. Proposition 2.20. Let (U, V ) be to the Archimedean copula C with  distributed according  generator . Then C is LTD(U V ) and LTD(V U ) if, and only if, −1 is logconvex. Proof. Here C is of the form C(u, v) = [−1] ((u) + (v)), (u, v) ∈ [0, 1]2 . Since {(u, v) : C(u, v) > 0} = (0, 1]2 (see the proof of Proposition 2.19), the generator  must be strict (that is, (0) = ∞; see [23, p. 92]). In such a case [−1] = −1 . Therefore (2.36) is the same as log[−1 ((u) + (v))] − log u − log v

is decreasing in (u, v) ∈ (0, 1]2 .

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

69

This is the same as (put x = (u) and y = (v)) log −1 (x + y) − log −1 (x) − log −1 (y) that is, −1 is logconvex.

is increasing in (x, y) ∈ (0, ∞)2 ,



Proposition 2.20 can also be proven by applying Proposition 3 of Averous and DortetBernadet [1] to the special case when one of the compared copulae is C ⊥ . Proposition 2.20 is of interest in comparison to results of Müller and Scarsini [21]. They obtained stronger positive dependence properties for Archimedean copulae, but under stronger conditions. 3. The strong orthant ratio orders Let X = (X1 , X2 , . . . , Xn ) and Y = (Y1 , Y2 . . . . , Yn ) be two random vectors with respective survival functions F and G. According to Hu et al. [13], X is said to be smaller than Y in the (strong) multivariate hazard rate order if F (x)G(y)F (x ∧ y)G(x ∨ y),

x, y ∈ Rn .

(3.1)

Hu et al. [13] did not assume that X and Y have distribution functions in the same Fréchet class. In this section, if, in addition to (3.1), X and Y have the same marginal distributions, then we will say that X is smaller than Y in the strong upper orthant increasing ratio order, and we denote that by X  suoir Y or by F  suoir G. Similarly, if F and G are in the same Fréchet class, and if F (x)G(y)F (x ∨ y)G(x ∧ y),

x, y ∈ Rn ,

(3.2)

then we will say that X is smaller than Y in the strong lower orthant decreasing ratio order, and we denote that by X slodr Y or by F  slodr G. Obviously, by choosing x y in (3.1) we get (2.5), and by choosing x y in (3.2) we get (2.1), that is, X suoir Y ⇒ X uoir Y

and

X  slodr Y ⇒ X  lodr Y.

(3.3)

Thus the orders  slodr and  suoir are often useful as a tool to identify random vectors that are ordered with respect to the orders  lodr and  uoir . The two orders  slodr and  suoir are closely related, and are preserved under strictly increasing transformations, as is indicated in the next analog of Theorems 2.1 and 2.2. The proof of the next result is straightforward, and is therefore omitted. Theorem 3.1. Let X = (X1 , X2 , . . . , Xn ) and Y = (Y1 , Y2 , . . . , Yn ) be two random vectors in the same Fréchet class. (a) If X  slodr Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  suoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any decreasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ), . . . , n (Xn )) suoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly decreasing functions 1 , 2 , . . . , n , then X  slodr Y.

70

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

(b) If X  suoir Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  slodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any decreasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ), . . . , n (Xn )) slodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly decreasing functions 1 , 2 , . . . , n , then X suoir Y. (c) If X slodr Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  slodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any increasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ), . . . , n (Xn )) slodr (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly increasing functions 1 , 2 , . . . , n , then X slodr Y. (d) If X suoir Y then (1 (X1 ), 2 (X2 ), . . . , n (Xn ))  suoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for any increasing functions 1 , 2 , . . . , n . Conversely, if (1 (X1 ), 2 (X2 ), . . . , n (Xn )) suoir (1 (Y1 ), 2 (Y2 ), . . . , n (Yn )) for some strictly increasing functions 1 , 2 , . . . , n , then X suoir Y. The following analog of Theorem 2.6 shows that, in many cases, establishing the order  slodr or the order  suoir between two distribution functions, in the same Fréchet class, is equivalent to establishing this order between the corresponding copulae; this is a desirable property that any positive dependence order should satisfy. Theorem 3.2. Let X and Y have, respectively, distribution functions F and G in n (F1 , F2 , . . . , Fn ). Then X  slodr Y [X  suoir Y] if, and only if, there exist copulae CF and CG (as in (2.13)) such that CF  slodr CG [CF  suoir CG ]. A useful corollary of Theorem 3.1 is the following analog of Corollary 2.7. Corollary 3.3. Let U and V be two random vectors whose distribution functions are copulae. Then U slodr V ⇐⇒ 1 − U  suoir 1 − V, U  suoir V ⇐⇒ 1 − U  slodr 1 − V. The orders  slodr and  suoir satisfy most of the postulates given in Section 2. Postulate P6 is easy to establish. For the order  suoir , Postulates P4 and P5 follow from results of Hu et al. [13]; this fact, together with Theorem 3.1, also implies that the order  slodr satisfies Postulates P4 and P5. Postulate P7 (for both orders) follows from Theorem 3.1. Postulate P2 (for both orders) follows from (3.3) and from the fact that the orders  lodr and  uoir satisfy Postulate P2. Postulate P3, for  slodr , follows from (3.3) and (2.9), and for  suoir it follows from (3.3) and (2.10). Postulate P1 does not hold in general. In fact, a random vector X satisfies X suoir X if, and only if, its survival function F is MTP2 (multivariate totally positive of order 2; that is, F (x)F (y)F (x ∧ y)F (x ∨ y)); see Hu et al. [13] for details. Similarly, Postulate P1 does not hold for  slodr . In Colangelo [6] it is shown that the orders  suoir and  slodr are transitive. The order  suoir does not satisfy Postulate P8. Firstly, the upper Fréchet bound does not always dominate every distribution, in the same Fréchet class, with respect to the order  suoir —this follows from (3.3), and from the fac, shown in Section 2, that the upper Fréchet

A. Colangelo et al. / Journal of Multivariate Analysis 97 (2006) 46 – 78

71

bound does not always dominate every distribution, in the same Fréchet class, with respect to the order  uoir . Similarly, the order  slodr does not satisfy Postulate P8. Secondly, in the bivariate case, the lower Fréchet bound is not always dominated by every distribution, in the same Fréchet class, with respect to the order  suoir , or with respect to the order  slodr . This is shown in the next example. Example 3.4. Let (X1 , X2 ), with distribution function F, take on the values (1, 1), (1, 2), (2, 1), (2, 2) with probabilities 1/10, 1/5, 3/5, and 1/10. Let x = (1, 0) and y = (0, 1), and let F − denote the lower Fréchet bound. Then F − (x) = 3/10, F (y) = 7/10, F − (x∧y) = 1, and F (x ∨ y) = 1/10. So F − suoir F . Taking G to be the distribution function of (−X1 , −X2 ), it can be verified that G− slodr G. It is of interest to contrast this example with Proposition 2.3. The converses of the implications in (3.3) are not true in general. However, under an additional assumption they are valid; these are given in the following proposition. We note that at the first glance, the following proposition seems to contradict Counterexample 2.3 of Hu et al. [13]. However, in that counterexample, the compared random vectors are not in the same Fréchet class. Proposition 3.5. Let X and Y be two n-dimensional random vectors in the same Fréchet class, with survival functions F and G. (a) If F and/or G is/are MTP2 then X  uoir Y ⇒ X  suoir Y. (b) If F and/or G is/are MTP2 then X  lodr Y ⇒ X slodr Y. Proof. First we prove part (a) when G is MTP2 ; this portion of the proof is similar to the proof of Theorem 2.1 of Hu et al. [13]. Fix x, y ∈ Rn . From X  uoir Y it follows that F (x)G(x ∧ y)F (x ∧ y)G(x) and from the MTP2 property of G it follows that G(x)G(y)G(x ∧ y)G(x ∨ y). Multiplication of these two inequalities yields F (x)G(x ∧ y)G(x)G(y)F (x ∧ y)G(x)G(x ∧ y)G(x ∨ y).

(3.4)

If F̄(x)Ḡ(y) = 0 then the inequality in (3.1) obviously holds. If F̄(x)Ḡ(y) > 0 then Ḡ(x ∧ y) > 0 because x ∧ y ≤ y, and Ḡ(x) > 0 because X ≤uoir Y implies that Ḡ(x) ≥ F̄(x). Thus we can cancel equal terms in (3.4) and obtain the inequality in (3.1).

Now assume that F̄ is MTP2. Again, fix x, y ∈ R^n. From X ≤uoir Y it follows that

F̄(x ∨ y)Ḡ(y) ≤ F̄(y)Ḡ(x ∨ y)

and from the MTP2 property of F̄ it follows that

F̄(x)F̄(y) ≤ F̄(x ∧ y)F̄(x ∨ y).    (3.5)


Multiplication of these two inequalities yields

F̄(x ∨ y)Ḡ(y)F̄(x)F̄(y) ≤ F̄(y)Ḡ(x ∨ y)F̄(x ∧ y)F̄(x ∨ y).    (3.6)

If F̄(x)Ḡ(y) = 0 then the inequality in (3.1) obviously holds. Thus, suppose that F̄(x)Ḡ(y) > 0 and note, in view of (3.5), that F̄(x ∨ y)F̄(y) > 0 if F̄(y) > 0. Therefore, in order to show that (3.6) implies the inequality in (3.1), it suffices to show that when F̄ is MTP2 and F and G have the same univariate marginals, then Ḡ(y) > 0 implies F̄(y) > 0.

Let y = (y1, y2, . . . , yn) be such that Ḡ(y) > 0. Since F and G have the same univariate marginals, it follows that, for each i = 1, 2, . . . , n, there exists y(i) = (y1(i), . . . , yi−1(i), yi, yi+1(i), . . . , yn(i)) with yj(i) ≤ yj, j ≠ i, such that F̄(y(i)) > 0. Note that y(1) ∨ y(2) ∨ · · · ∨ y(n) = y and therefore F̄(y(1) ∨ y(2) ∨ · · · ∨ y(n)) = F̄(y). From the MTP2 property of F̄ the following inequalities follow:

F̄(y(1) ∨ (y(2) ∨ · · · ∨ y(n)))F̄(y(1) ∧ (y(2) ∨ · · · ∨ y(n))) ≥ F̄(y(1))F̄(y(2) ∨ · · · ∨ y(n)),
F̄(y(2) ∨ (y(3) ∨ · · · ∨ y(n)))F̄(y(2) ∧ (y(3) ∨ · · · ∨ y(n))) ≥ F̄(y(2))F̄(y(3) ∨ · · · ∨ y(n)),
...
F̄(y(n−2) ∨ (y(n−1) ∨ y(n)))F̄(y(n−2) ∧ (y(n−1) ∨ y(n))) ≥ F̄(y(n−2))F̄(y(n−1) ∨ y(n)),
F̄(y(n−1) ∨ y(n))F̄(y(n−1) ∧ y(n)) ≥ F̄(y(n−1))F̄(y(n)).

Since F̄(y(n)) > 0, F̄(y(n−1)) > 0, . . . , F̄(y(1)) > 0, it is easy to see from the above inequalities that F̄(y) = F̄(y(1) ∨ y(2) ∨ · · · ∨ y(n)) > 0. This ends the proof of part (a). Part (b) is proven similarly. □

The orders ≤slodr and ≤suoir are quite strong in the sense that if two distribution functions can be compared with respect to any of these orders, then the larger one must have a positive dependence property. This is shown in the next result, and will be used in the sequel.

Proposition 3.6. Let (X, Y) have the distribution function G ∈ Γ2(F1, F2).
(a) If G ≥slodr F, for some F ∈ Γ2(F1, F2), then LTD(X | Y) and LTD(Y | X).
(b) If G ≥suoir F, for some F ∈ Γ2(F1, F2), then RTI(X | Y) and RTI(Y | X).

Proof. We prove only (a); the proof of (b) is similar. If G ≥slodr F then

F(x1, x2)G(y1, y2) ≤ F(x1 ∨ y1, x2 ∨ y2)G(x1 ∧ y1, x2 ∧ y2),

for all x1, x2, y1, y2. Select above x1 ≤ y1, and let x2 → ∞. Then

F1(x1)G(y1, y2) ≤ F1(y1)G(x1, y2),

that is, P{X ≤ x | Y ≤ y} is decreasing in y for all x. In a similar manner it can be shown that P{Y ≤ y | X ≤ x} is decreasing in x for all y. □

Some examples of distributions that are ordered with respect to ≤slodr and ≤suoir will now be given.


Example 3.7. From Proposition 2.3 it follows that in any Fréchet class Γ2(F1, F2) we have F− ≤uoir F⊥ and F− ≤uoir F+. Furthermore, from Example 2.8 we have that F⊥ ≤uoir F+ in any Fréchet class Γn(F1, F2, . . . , Fn). Since F+ and F⊥ are MTP2 in Γn(F1, F2, . . . , Fn), it follows, by Proposition 3.5, that F− ≤suoir F⊥ and F− ≤suoir F+ in Γ2(F1, F2), and that F⊥ ≤suoir F+ in Γn(F1, F2, . . . , Fn). Similarly it can be shown that F− ≤slodr F⊥ and F− ≤slodr F+ in Γ2(F1, F2), and that F⊥ ≤slodr F+ in Γn(F1, F2, . . . , Fn).

Example 2.9 (continued). The survival functions F̄ and Ḡ in Example 2.9 are MTP2 (see Example 5.2 in [13]). Therefore, from Proposition 3.5 we obtain that if the set of inequalities (2.17) holds, with the last inequality there being an equality, then F ≤suoir G.

Example 2.10(a) (continued). Let F(α) be as in Example 2.10(a). It was shown there that if α1 ≤ α2 then F(α1) ≤lodr F(α2), and if (−1)^n α1 ≤ (−1)^n α2 then F(α1) ≤uoir F(α2). Here we will show that if α2 ≥ α1 ≥ 0 then F(α1) ≤slodr F(α2), and if (−1)^n α2 ≥ (−1)^n α1 ≥ 0 then F(α1) ≤suoir F(α2). By Proposition 3.5, in order to show the former, it suffices to show that if α ≥ 0 then F(α) is MTP2. The MTP2 property is preserved under products, and under strictly decreasing transformations of the variables (see [15]). Therefore it suffices to show that if α ≥ 0 then 1 + α ∏_{i=1}^{n} x_i is TP2 in each pair of the xi's. By the symmetry of the xi's, it is enough to show that [1 + εx1x2] is TP2 in (x1, x2) when ε ∈ [0, 1] (here ε = α ∏_{i=3}^{n} x_i). It is easy to check that, indeed,

[1 + εx1x2][1 + εy1y2] ≤ [1 + ε(x1 ∧ y1)(x2 ∧ y2)][1 + ε(x1 ∨ y1)(x2 ∨ y2)].

Thus, F(α1) ≤slodr F(α2) if α2 ≥ α1 ≥ 0. Similarly it can be shown that if (−1)^n α2 ≥ (−1)^n α1 ≥ 0 then F(α1) ≤suoir F(α2). Note that for choices of α1 and α2 not considered above, the distributions F(α1) and F(α2) are not ordered with respect to the orders ≤slodr and ≤suoir, although, by Example 2.10(a), they are ordered with respect to the orders ≤lodr and ≤uoir.

Example 2.11 (continued). Let Cα be as in Example 2.11. We want to show that if α ≥ 0 then Cα(u, v) is TP2 in (u, v) ∈ [0, 1]^2. By the closure properties of TP2 functions, it suffices to show that (1 − uv)^{−1} is TP2, that is, that

(1 − u1u2)^{−1}(1 − v1v2)^{−1} ≤ (1 − (u1 ∧ v1)(u2 ∧ v2))^{−1}(1 − (u1 ∨ v1)(u2 ∨ v2))^{−1}.

This is easy to verify (a random numerical check of this inequality, and of the one displayed in the preceding example, is sketched at the end of this section). It follows from Example 2.11 and from Proposition 3.5 that Cα is increasing in α ∈ [0, 1] with respect to the order ≤slodr. Let Dα be as in Example 2.11. From the above result, together with Corollary 3.3, we get that Dα is increasing in α ∈ [0, 1] with respect to the order ≤suoir.

Example 2.13 (continued). Let Cα be as in Example 2.13. It is easy to prove that Cα(u, v) is TP2 in u and v (in fact, it suffices to show the TP2-ness of min{u, v}). Therefore, from Proposition 3.5 we obtain that Cα ≤suoir Cβ whenever 0 ≤ α ≤ β ≤ 1.

We will now discuss some relationships of the strong orthant ratio orders to other positive dependence orders.


First we note that the survival function of X in Example 2.15 is MTP2. It follows from Proposition 3.5 that X and Y in that example satisfy X ≤suoir Y, and therefore X ≤suoir Y ⇏ X ≤sm Y. Similarly, X ≤slodr Y ⇏ X ≤sm Y.

We have seen in Section 2.4 that the order ≤TP2 does not imply the orders ≤uoir and ≤lodr. Consequently, it follows that

(X1, X2) ≤TP2 (Y1, Y2) ⇏ (X1, X2) ≤suoir (Y1, Y2),
(X1, X2) ≤TP2 (Y1, Y2) ⇏ (X1, X2) ≤slodr (Y1, Y2).

In the next example it is shown that

(X1, X2) ≤suoir (Y1, Y2) ⇏ (X1, X2) ≤TP2 (Y1, Y2),
(X1, X2) ≤slodr (Y1, Y2) ⇏ (X1, X2) ≤TP2 (Y1, Y2).

Example 3.8. Let (X1, X2) take on each of the values (1, 2), (1, 3), (2, 2), (2, 3), (3, 2), and (3, 3) with probability 1/10, and each of the values (2, 1) and (3, 1) with probability 2/10. Let (Y1, Y2) take on each of the values (2, 1), (2, 3), (3, 1), and (3, 2) with probability 1/10, and each of the values (1, 1), (2, 2), and (3, 3) with probability 2/10. A lengthy computation shows that (X1, X2) ≤suoir (Y1, Y2) (such a computation can be automated; see the sketch at the end of this section). However, inequality (2.22) fails with x1 = 2, x2 = 3, y1 = 1, y2 = 2. Thus, (X1, X2) ≤suoir (Y1, Y2) ⇏ (X1, X2) ≤TP2 (Y1, Y2).

Since, in general, (X1, X2) ≤suoir (Y1, Y2) ⇐⇒ (−X1, −X2) ≤slodr (−Y1, −Y2), and (X1, X2) ≤TP2 (Y1, Y2) ⇐⇒ (−X1, −X2) ≤TP2 (−Y1, −Y2), it follows from the above example that also (X1, X2) ≤slodr (Y1, Y2) ⇏ (X1, X2) ≤TP2 (Y1, Y2). In light of (3.3), it follows from the above arguments that (X1, X2) ≤uoir (Y1, Y2) ⇏ (X1, X2) ≤TP2 (Y1, Y2) and that (X1, X2) ≤lodr (Y1, Y2) ⇏ (X1, X2) ≤TP2 (Y1, Y2).

The strong orders ≤slodr and ≤suoir (unlike the weak orders ≤lodr and ≤uoir; see (2.28) and (2.29)) imply the LTD and RTI orders of Averous and Dortet-Bernadet [1] under some regularity conditions. This is shown in the next result.

Theorem 3.9. Let F and G be in the Fréchet class Γ2(F1, F2). Assume that, for every x, the conditional distributions F^L_x and F^R_x (see (2.23)) are strictly increasing and continuous on their supports. Then

F ≤slodr G ⇒ F ≤LTD G,
F ≤suoir G ⇒ F ≤RTI G.

Proof. We prove only the first implication; the proof of the other implication is similar. Averous and Dortet-Bernadet [1] proved that, under the stated regularity conditions, F ≤LTD G if, and only if, for x ≤ x′, and for any y, y′, it holds that

F(x, y) ≥ G(x, y′) ⇒ F(x′, y) ≥ G(x′, y′).    (3.7)


Now assume that F ≤slodr G. So, for x ≤ x′ and y′ ≤ y we have

F(x, y)G(x′, y′) ≤ F(x′, y)G(x, y′).    (3.8)

Note that F ≤slodr G implies that F ≤PQD G. So the left hand side inequality in (3.7) can hold only for y′ ≤ y. If it does hold, then (3.8) implies the inequality on the right-hand side of (3.7). □

In light of (2.25) and (2.26) it follows that the implications in Theorem 3.9 are strict.

The strong orders ≤slodr and ≤suoir (again, unlike the weak orders ≤lodr and ≤uoir; see (2.31) and (2.32)) imply the LTD and RTI orders of Hollander et al. [12]. This is shown in the next result.

Theorem 3.10. Let F and G be in the Fréchet class Γ2(F1, F2). Then

F ≤slodr G ⇒ F ≤∗LTD G,
F ≤suoir G ⇒ F ≤∗RTI G.

Proof. We prove only the first implication; the proof of the other implication is similar. Fix x ≤ x′ and any y. The assumption F ≤slodr G yields F(x, y)G(x′, y) ≤ F(x′, y)G(x, y). Therefore

F(x, y)/F1(x) − F(x′, y)/F1(x′) ≤ [F(x′, y)/G(x′, y)] · [G(x, y)/F1(x)] − F(x′, y)/F1(x′)
                               = [F(x′, y)/G(x′, y)] · [G(x, y)/F1(x) − G(x′, y)/F1(x′)].

The assumption F ≤slodr G also yields F(x′, y) ≤ G(x′, y) (since F ≤PQD G) and G(x, y)/F1(x) − G(x′, y)/F1(x′) ≥ 0 (by Proposition 3.6(a)). Therefore

[F(x′, y)/G(x′, y)] · [G(x, y)/F1(x) − G(x′, y)/F1(x′)] ≤ G(x, y)/F1(x) − G(x′, y)/F1(x′).

Combining the two inequalities above we obtain (2.30). □

In light of (2.33) and (2.34) it follows that the implications in Theorem 3.10 are strict.

A word on applications. We end this section with a word on the usefulness of the strong orthant ratio orders in practice. First, as was mentioned earlier, the orders ≤slodr and ≤suoir are useful as a tool to identify random vectors that are ordered with respect to the orders ≤lodr and ≤uoir. Thus, by (3.3), they imply all the inequalities in the applications that were discussed in Section 2.5. Furthermore, we have also shown above that the strong orthant ratio orders hold in quite a few examples, and that they imply the LTD and RTI orders of Averous and Dortet-Bernadet [1].


Thus, pairs of random vectors that are comparable with respect to the strong orthant ratio orders (as in the examples earlier in this section) satisfy the inequalities derived in Averous and Dortet-Bernadet [1], with applications in queueing models and in actuarial science.

We close this section with an analog of Proposition 2.19.

Proposition 3.11. Let (U, V) be distributed according to the copula C. Then
(a) C ≥slodr C⊥ if, and only if, LTD(U | V) and LTD(V | U).
(b) C ≥suoir C⊥ if, and only if, RTI(U | V) and RTI(V | U).

Proof. The above results follow from Proposition 2.19, Proposition 3.5, and the fact that C⊥ is TP2. □
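For distributions with finite support, the defining inequality (3.1) of the order ≤suoir can be checked mechanically. The following sketch (in Python; the routine and its names are merely illustrative) evaluates the two survival functions on a grid built from the support coordinates, augmented by one point below the smallest coordinate, and tests (3.1) at all pairs of grid points; it assumes that (3.1) reads F̄(x)Ḡ(y) ≤ F̄(x ∧ y)Ḡ(x ∨ y) for all x and y, as in the proof of Proposition 3.5. Applied to Example 3.4 it exhibits a violating pair of points, and it can also be used to confirm the lengthy computation mentioned in Example 3.8.

```python
from itertools import product

def survival(pmf, x):
    """P{X1 > x1, X2 > x2} for a finitely supported bivariate pmf {(a, b): prob}."""
    return sum(p for (a, b), p in pmf.items() if a > x[0] and b > x[1])

def is_suoir(pmf_F, pmf_G, tol=1e-12):
    """Check F <=suoir G, i.e. Fbar(x)Gbar(y) <= Fbar(x ^ y)Gbar(x v y) for all x, y.
    The survival functions are step functions, so it suffices to test x and y on the
    grid of support coordinates augmented by one point below the smallest coordinate."""
    pts = list(pmf_F) + list(pmf_G)
    xs = sorted({a for a, _ in pts})
    ys = sorted({b for _, b in pts})
    grid = list(product([xs[0] - 1] + xs, [ys[0] - 1] + ys))
    for x in grid:
        for y in grid:
            meet = (min(x[0], y[0]), min(x[1], y[1]))
            join = (max(x[0], y[0]), max(x[1], y[1]))
            lhs = survival(pmf_F, x) * survival(pmf_G, y)
            rhs = survival(pmf_F, meet) * survival(pmf_G, join)
            if lhs > rhs + tol:
                return False, (x, y, lhs, rhs)
    return True, None

# Example 3.4: F and (a version of) the lower Frechet bound F- in the same Frechet class.
F = {(1, 1): 0.1, (1, 2): 0.2, (2, 1): 0.6, (2, 2): 0.1}
# Countermonotonic coupling of the marginals of F: X1 = 1 exactly when X2 = 2.
F_minus = {(1, 2): 0.3, (2, 1): 0.7}
print(is_suoir(F_minus, F))   # (False, ...): F- is not <=suoir F

# Example 3.8: the text reports that (X1, X2) <=suoir (Y1, Y2);
# the same routine can be used to confirm that computation.
X = {(1, 2): 0.1, (1, 3): 0.1, (2, 2): 0.1, (2, 3): 0.1, (3, 2): 0.1, (3, 3): 0.1,
     (2, 1): 0.2, (3, 1): 0.2}
Y = {(2, 1): 0.1, (2, 3): 0.1, (3, 1): 0.1, (3, 2): 0.1,
     (1, 1): 0.2, (2, 2): 0.2, (3, 3): 0.2}
print(is_suoir(X, Y))
```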
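The elementary TP2 inequalities invoked in the continuations of Examples 2.10(a) and 2.11 can also be spot-checked numerically. The following sketch (again Python, with illustrative names) samples random points and verifies that [1 + εx1x2][1 + εy1y2] ≤ [1 + ε(x1 ∧ y1)(x2 ∧ y2)][1 + ε(x1 ∨ y1)(x2 ∨ y2)] for ε ∈ [0, 1] and points in [0, 1]^2, and the analogous inequality for (1 − uv)^{−1} on [0, 1)^2; such a random search is of course only a sanity check, not a proof.

```python
import random

def tp2_gap(f, x1, x2, y1, y2):
    """RHS minus LHS of the TP2 inequality
    f(x1, x2) f(y1, y2) <= f(x1 ^ y1, x2 ^ y2) f(x1 v y1, x2 v y2)."""
    lhs = f(x1, x2) * f(y1, y2)
    rhs = f(min(x1, y1), min(x2, y2)) * f(max(x1, y1), max(x2, y2))
    return rhs - lhs

random.seed(0)
worst_ex210 = float("inf")   # factor from Example 2.10(a) (continued)
worst_ex211 = float("inf")   # factor from Example 2.11 (continued)
for _ in range(100_000):
    eps = random.random()                               # eps in [0, 1]
    fgm_factor = lambda s, t: 1.0 + eps * s * t
    pts = [random.random() for _ in range(4)]           # x1, x2, y1, y2 in [0, 1]
    worst_ex210 = min(worst_ex210, tp2_gap(fgm_factor, *pts))

    inv_factor = lambda s, t: 1.0 / (1.0 - s * t)
    pts = [0.99 * random.random() for _ in range(4)]    # keep the product away from 1
    worst_ex211 = min(worst_ex211, tp2_gap(inv_factor, *pts))

print(worst_ex210, worst_ex211)   # both should be >= 0 (up to rounding error)
```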

4. A comment on an even stronger order

One may think of considering the multivariate likelihood ratio order ≤lr (see [26, Section 4.E]) as a positive dependence order when the compared random vectors have the same marginal distributions. However, such an order is of no interest because it would imply equality in law of the compared random vectors. This follows from the fact that the order ≤lr implies the ordinary stochastic order ≤st, and, when two random vectors that are comparable with respect to ≤st have the same marginal distributions, they must have the same distribution functions (see Proposition 6 in [24] or Theorem 4 in [2]).

An order that is weaker than ≤lr, and which does not imply the order ≤st, is described next. Let X = (X1, X2, . . . , Xn) and Y = (Y1, Y2, . . . , Yn) be two random vectors with respective (discrete or continuous) density functions f and g, and respective distribution functions F and G. Suppose that X and Y have the same marginal distributions, and that

g(x)/f(x) is increasing on the union of the supports of X and Y.    (4.1)

Condition (4.1), but without the assumption of equal marginal distributions, was studied in Whitt [30]. One may wonder whether (4.1) is a useful condition for identifying the orders ≤lodr and ≤uoir. It turns out that, under mild regularity conditions, if X = (X1, X2) and Y = (Y1, Y2) satisfy (4.1), and they have the same marginal distributions, then they must be equal in law.

Proposition 4.1. If X = (X1, X2) and Y = (Y1, Y2) have a common support that is a lattice, if they satisfy (4.1), and if they have the same univariate marginal distributions, then X =st Y.

Proof. Suppose that X and Y satisfy (4.1), and that they have the same univariate marginal distributions. Assume, however, that they do not have the same distribution. Then there exists a point x = (x1, x2) such that g(x) > f(x), and a point s = (s1, s2) such that g(s) < f(s). Since the common support is a lattice, we can, by (4.1), assume that s ≤ x. It follows that there exists a point t = (t1, t2) ('between' x and s) such that g(y) ≤ f(y) for any y ≤ t, and g(y) > f(y) for any y > t (here (y1, y2) > (t1, t2) means yi > ti for i = 1, 2).


Let S denote the common support of X and Y, and let PX and PY denote the corresponding probability measures. Consider the 'upper quadrant' A = {y ∈ S : y > t} and the 'lower quadrant' B = {y ∈ S : y ≤ t}. Since x ∈ A, and g(x) > f(x), it follows from (4.1) that PY{A} > PX{A}; that is,

P{X1 > t1, X2 > t2} < P{Y1 > t1, Y2 > t2}.

Since X2 =st Y2, it follows that P{X1 ≤ t1, X2 > t2} > P{Y1 ≤ t1, Y2 > t2}. Since X1 =st Y1, it further follows that P{X1 ≤ t1, X2 ≤ t2} < P{Y1 ≤ t1, Y2 ≤ t2}; that is, PY{B} > PX{B}. But s ∈ B, and the last inequality contradicts g(s) < f(s). □

Thus, in the bivariate case, condition (4.1) is of no interest in the context of positive dependence stochastic orders. We do not know whether an analog of Proposition 4.1 holds for three- (or higher-)dimensional X and Y.
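As a small numerical illustration of Proposition 4.1 (an illustration only, not part of the proof), one can take any two distinct bivariate probability mass functions with the same margins on a lattice support and observe that the ratio g/f fails to be increasing. A minimal sketch, with an arbitrarily chosen pair of pmfs and illustrative names:

```python
from itertools import product

# Two pmfs on the lattice {0, 1}^2 with the same (uniform) margins but f != g.
f = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}   # independence
g = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.20, (1, 1): 0.30}   # positive dependence

def leq(a, b):
    """Componentwise order on the lattice."""
    return a[0] <= b[0] and a[1] <= b[1]

ratio = {p: g[p] / f[p] for p in f}
violations = [(p, q) for p, q in product(f, f) if leq(p, q) and ratio[p] > ratio[q]]
print(violations)   # nonempty, e.g. ((0, 0), (0, 1)): g/f is not increasing on the
                    # support, in accordance with Proposition 4.1
```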

Acknowledgments We thank a referee for useful and encouraging comments on a previous version of this paper.

References

[1] J. Averous, J.-L. Dortet-Bernadet, LTD and RTI dependence orderings, Canad. J. Statist. 28 (2000) 151–157. [2] F. Baccelli, A.M. Makowski, Multidimensional stochastic ordering and associated random variables, Oper. Res. 37 (1989) 478–487. [3] R.E. Barlow, F. Proschan, Statistical Theory of Reliability and Life Testing, Probability Models, Holt, Rinehart, and Winston, New York, 1975. [4] H.W. Block, M.-L. Ting, Some concepts of multivariate dependence, Commun. Statist.—Theory Methods A 10 (1981) 749–762. [5] P. Capéraà, A.-L. Fougères, C. Genest, A stochastic ordering based on a decomposition of Kendall’s Tau, in: V. Beneš, J. Štěpán (Eds.), Distributions with given Marginals and Moment Problems, Kluwer, Boston, 1997, pp. 81–86. [6] A. Colangelo, Some Studies in Positive Dependence, Ph.D. dissertation, Università Bocconi, 2005. [7] A. Colangelo, M. Scarsini, M. Shaked, Some multivariate positive dependence notions, Canad. J. Statist. 2005, to appear. [8] P. Collet, F.J. López, S. Martínez, Order relations of measures when avoiding decreasing sets, Statist. Probab. Lett. 65 (2003) 165–175. [9] M. Dall’Aglio, M. Scarsini, Zonoids, linear dependence, and size-biased distributions on the simplex, Adv. Appl. Probab. 35 (2003) 871–884. [10] M. Denuit, J. Dhaene, C. Le Bailly de Tilleghem, S. Teghem, Measuring the impact of a dependence among insured lifelengths, Belg. Actuarial Bull. 1 (2001) 18–39. [11] N. Ebrahimi, M. Ghosh, Multivariate negative dependence, Commun. Statist.—Theory Methods A 10 (1981) 307–337. [12] M. Hollander, F. Proschan, J. Sconing, Information, censoring, and dependence, in: H.W. Block, A.R. Sampson, T.H. Savits (Eds.), Topics in Statistical Dependence, IMS Lecture Notes—Monograph Series, vol. 16, Hayward, CA, pp. 257–268. [13] T. Hu, B.-E. Khaledi, M. Shaked, Multivariate hazard rate orders, J. Multivariate Anal. 84 (2003) 173–189. [14] H. Joe, Multivariate Models and Dependence Concepts, Chapman & Hall, London, 1997. [15] S. Karlin, Y. Rinott, Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions, J. Multivariate Anal. 10 (1980) 467–498. [16] G. Kimeldorf, A.R. Sampson, Positive dependence orderings, Ann. Inst. Statist. Math. 39 (1987) 113–128.


[17] G. Kimeldorf, A.R. Sampson, A framework for positive dependence, Ann. Inst. Statist. Math. 41 (1989) 31–45. [18] S. Kotz, N. Balakrishnan, N.L. Johnson, Continuous Multivariate Distributions, Vol. 1: Models and Applications, Wiley, New York, 2000. [19] A.W. Marshall, I. Olkin, A multivariate exponential distribution, J. Amer. Statist. Assoc. 62 (1967) 30–44. [20] M.H. Metry, A.R. Sampson, A family of partial orderings for positive dependence among fixed marginal bivariate distributions, in: G. Dall’Aglio, S. Kotz, G. Salinetti (Eds.), Advances in Probability Distributions with Given Marginals (Rome, 1990), Kluwer, Dordrecht, 1991, pp. 129–138. [21] A. Müller, M. Scarsini, Archimedean copulae and positive dependence, J. Multivariate Anal. 2005, to appear, doi:10.1016/j.jmva.2004.04.003. [22] A. Müller, D. Stoyan, Comparison Methods for Stochastic Models and Risks, Wiley, New York, 2002. [23] R.B. Nelsen, An Introduction to Copulas, Springer, New York, 1999. [24] L. Rüschendorf, Stochastically ordered distributions and monotonicity of the OC-function of sequential probability ratio tests, Math. Oper. Statist. Ser. Statist. 12 (1981) 327–338. [25] B. Schweizer, A. Sklar, Probabilistic Metric Spaces, North-Holland, New York, 1983. [26] M. Shaked, J.G. Shanthikumar, Stochastic Orders and Their Applications, Academic Press, Boston, 1994. [27] M. Shaked, J.G. Shanthikumar, Supermodular stochastic orders and positive dependence of random vectors, J. Multivariate Anal. 61 (1997) 86–101. [28] M. Sklar, Fonctions de répartition à n dimensions et leurs marges, Publ. l’Inst. Statist. Univ. Paris 8 (1959) 229–231. [29] R. Szekli, R.L. Disney, S. Hur, MR/GI/1 queues with positively correlated arrival stream, J. Appl. Probab. 31 (1994) 497–514. [30] W. Whitt, Multivariate monotone likelihood ratio and uniform conditional stochastic order, J. Appl. Probab. 19 (1982) 695–701.