Stochastic comparisons of weighted sums of arrangement increasing random variables


Xiaoqing Pan a,*, Min Yuan b, Subhash C. Kochar c

a Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
b Department of Probability and Statistics, School of Mathematics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
c Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, Portland, OR 97006, USA

* Corresponding author. E-mail addresses: [email protected] (X. Pan), [email protected] (M. Yuan), [email protected] (S.C. Kochar).

http://dx.doi.org/10.1016/j.spl.2015.03.012
0167-7152/© 2015 Published by Elsevier B.V.

Article history: Received 22 August 2014; Received in revised form 27 March 2015; Accepted 30 March 2015; Available online xxxx.

Abstract: Assuming that the joint density of random variables X1, X2, . . . , Xn is arrangement increasing (AI), we obtain some stochastic comparison results on weighted sums of the Xi's under some additional conditions. An application to optimal capital allocation is also given.

MSC: 60E15; 62N05; 62G30

Keywords: Increasing convex order; Usual stochastic order; Majorization; Supermodular (Submodular); Arrangement increasing; Joint likelihood ratio ordering; Log-concavity


1. Introduction

During the past few decades, linear combinations of random variables have been extensively studied in statistics, operations research, reliability theory, actuarial science and other fields. Most of the related work is restricted to specific distributions such as the exponential, Weibull, gamma and uniform, among others. Karlin and Rinott (1983) and Yu (2011) studied stochastic properties of linear combinations of independent and identically distributed (i.i.d.) random variables without imposing any distributional assumptions. Later on, Xu and Hu (2011, 2012), Pan et al. (2013) and Mao et al. (2013) weakened the i.i.d. assumption to independent, yet possibly non-identically distributed (i.ni.d.), random variables. It should be noted that most of the related work assumes that the random variables are mutually independent.

Recently, some work has appeared on stochastic comparisons of dependent random variables. Xu and Hu (2012) discussed stochastic comparisons of comonotonic random variables with applications to capital allocations. You and Li (2014) focused on linear combinations of random variables with an Archimedean dependence structure. Cai and Wei (2014) proposed




several new notions of dependence to measure dependence between risks. They proved that characterizations of these notions are related to properties of arrangement increasing (AI) functions (to be defined in Section 2). Motivated by the importance of AI functions, we study the problem of stochastic comparisons of weighted sums of AI random variables in this paper. We say X1, . . . , Xn are AI random variables if their joint density f(x) is an AI function. Ma (2000) proved the following result for AI random variables X1, . . . , Xn:

a ≽m b ⟹ Σ_{i=1}^n a(i) Xi ≥icx Σ_{i=1}^n b(i) Xi, ∀ a, b ∈ ℜn, (1.1)

where a(1) ≤ a(2) ≤ · · · ≤ a(n) is the increasing arrangement of the components of the vector a = (a1, a2, . . . , an). The formal definitions of stochastic orders and majorization orders are given in Section 2.

Let X1, X2, . . . , Xn be independent random variables satisfying X1 ≥hr X2 ≥hr · · · ≥hr Xn, and let φ(x, a) be a convex function which is increasing in x for each a. Mao et al. (2013) proved that

(i) if φ is submodular, then

a ≽m b ⟹ Σ_{i=1}^n φ(Xi, a(i)) ≥icx Σ_{i=1}^n φ(Xi, b(i)); (1.2)

(ii) if φ is supermodular, then

a ≽m b ⟹ Σ_{i=1}^n φ(Xi, a(n−i+1)) ≥icx Σ_{i=1}^n φ(Xi, b(n−i+1)). (1.3)

The function φ in (1.2) and (1.3) can be interpreted as an appropriate distance measure in actuarial science; for more details, please refer to Xu and Hu (2012). In this paper we further study the problem of stochastic comparisons of linear combinations of AI random variables, not only for the increasing convex order but also for the usual stochastic order.

The rest of this paper is organized as follows. Some preliminaries are given in Section 2. The main results are presented in Section 3. An application to optimal capital allocation is discussed in Section 4.
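To make comparisons such as (1.1)–(1.3) concrete: S ≥icx T is equivalent to E[(S − t)+] ≥ E[(T − t)+] for every t, so the increasing convex order can be probed through empirical stop-loss transforms. The following Python sketch is our illustration, not part of the paper; it checks (1.2) for independent exponentials ordered by hazard rate, with the convex, submodular choice φ(x, a) = (x − a)+. All numerical choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000

# X1 >=hr X2 >=hr X3: exponentials with increasing hazard rates.
rates = np.array([0.5, 1.0, 2.0])
X = rng.exponential(1.0 / rates, size=(n_sim, 3))

# a majorizes b; a is already in its increasing arrangement a_(i).
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])

phi = lambda x, w: np.maximum(x - w, 0.0)  # convex, increasing in x, submodular

S_a = phi(X, a).sum(axis=1)
S_b = phi(X, b).sum(axis=1)

# Empirical stop-loss transforms E[(S - t)+]; the icx order in (1.2) predicts
# that the a-column dominates the b-column at every retention level t.
for t in np.linspace(0.0, 8.0, 9):
    sl_a = np.maximum(S_a - t, 0.0).mean()
    sl_b = np.maximum(S_b - t, 0.0).mean()
    print(f"t={t:4.1f}  E[(S_a-t)+]={sl_a:8.4f}  E[(S_b-t)+]={sl_b:8.4f}")
```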


2. Preliminaries


In this section, we give the definitions of some stochastic orders, majorization orders and supermodular [submodular] functions. Throughout the paper, the terms 'increasing' and 'decreasing' are used to mean 'non-decreasing' and 'non-increasing', respectively.

Definition 2.1 (Stochastic Orders). Let X and Y be two random variables with probability (mass) density functions f and g, and survival functions F̄ and Ḡ, respectively. We say that X is smaller than Y

(1) in the usual stochastic order, denoted by X ≤st Y, if F̄(t) ≤ Ḡ(t) for all t or, equivalently, if E[h(X)] ≤ E[h(Y)] for all increasing functions h;
(2) in the hazard rate order, denoted by X ≤hr Y, if Ḡ(t)/F̄(t) is increasing in t for which the ratio is well defined;
(3) in the likelihood ratio order, denoted by X ≤lr Y, if g(t)/f(t) is increasing in t for which the ratio is well defined;
(4) in the increasing convex order, denoted by X ≤icx Y, if E[h(X)] ≤ E[h(Y)] for all increasing convex functions h for which the expectations exist.

The relationships among these orders are shown in the following diagram (see Shaked and Shanthikumar, 2007, and Müller and Stoyan, 2002):

X ≤lr Y ⟹ X ≤hr Y ⟹ X ≤st Y ⟹ X ≤icx Y.

Shanthikumar and Yao (1991) considered the problem of extending the above concepts to compare the components of dependent random vectors. In this paper we will focus only on the extension of likelihood ratio ordering to the case of dependent random variables. Let (X, Y) be a continuous bivariate random vector on [0, ∞)2 with joint density (or mass) function f(x, y).

Definition 2.2. For a bivariate random vector (X, Y), X is said to be smaller than Y according to the joint likelihood ratio ordering, denoted by X ≤ℓr:j Y, if and only if

E[Ψ(X, Y)] ≥ E[Ψ(Y, X)] for all Ψ ∈ Gℓr,

where

Gℓr = {Ψ : Ψ(x, y) ≥ Ψ(y, x), x ≤ y}.


It can be seen that

X ≤ℓr:j Y ⇔ f ∈ Gℓr,

where f(·, ·) denotes the joint density of (X, Y). As pointed out by Shanthikumar and Yao (1991), joint likelihood ratio ordering between the components of a bivariate random vector may not imply likelihood ratio ordering between their marginal distributions unless the random variables are independent, but it does imply stochastic ordering between them, that is,

X ≤ℓr:j Y ⟹ X ≤st Y.

A bivariate function Ψ ∈ Gℓr is called arrangement increasing (AI). Hollander et al. (1977) studied many interesting properties of such functions, though, apparently, they did not relate them to the notion of likelihood ratio ordering. We can extend this concept to compare more than two random variables in the following way. Let π = (π(1), . . . , π(n)) be any permutation of {1, . . . , n} and let π(x) = (xπ(1), . . . , xπ(n)). For any 1 ≤ i ≠ j ≤ n, we denote πij = (πij(1), . . . , πij(n)) with πij(i) = j, πij(j) = i and πij(k) = k for k ≠ i, j.

Definition 2.3 (AI Function). A real-valued function g(x) defined on ℜn is said to be an arrangement increasing (AI) function if

(xi − xj) (g(x) − g(πij(x))) ≤ 0

for any pair (i, j) such that 1 ≤ i < j ≤ n.


We say X1 , . . . , Xn are AI random variables if their joint density f (x) is an AI function.
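Definition 2.3 lends itself to a direct numerical test: swap each pair of coordinates and check the sign condition. The following sketch is our illustration, not part of the paper; a linear function with increasing weights is a standard AI example, and reversing the weights breaks the property.

```python
import itertools
import numpy as np

def is_ai(g, xs, tol=1e-12):
    """Check the Definition 2.3 inequality
    (x_i - x_j) * (g(x) - g(pi_ij(x))) <= 0, i < j, on sample points xs."""
    n = xs.shape[1]
    for x in xs:
        for i, j in itertools.combinations(range(n), 2):
            y = x.copy()
            y[i], y[j] = y[j], y[i]          # the transposition pi_ij
            if (x[i] - x[j]) * (g(x) - g(y)) > tol:
                return False
    return True

rng = np.random.default_rng(1)
xs = rng.normal(size=(500, 3))

lam = np.array([1.0, 2.0, 3.0])              # increasing coefficients
print(is_ai(lambda x: lam @ x, xs))          # True: sum lam_i * x_i is AI
print(is_ai(lambda x: lam @ x[::-1], xs))    # False: reversed weights
```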


Definition 2.4. A function h(x, y) is said to be totally positive of order 2 (TP2) if h(x, y) ≥ 0 and

h(x1, y1) h(x2, y2) − h(x1, y2) h(x2, y1) ≥ 0 (2.1)

whenever x1 < x2 and y1 < y2.
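Condition (2.1) can be checked on a finite grid. The sketch below is ours, not from the paper; it verifies (2.1) for the normal location family, which is TP2, and shows that the Cauchy location family, a standard family without monotone likelihood ratios, fails. The densities are left unnormalized since constant factors do not affect (2.1).

```python
import numpy as np

def is_tp2(h, us, vs, tol=1e-12):
    """Check condition (2.1): h(u1,v1)h(u2,v2) - h(u1,v2)h(u2,v1) >= 0
    for all u1 < u2, v1 < v2 on the supplied grids."""
    us, vs = np.sort(us), np.sort(vs)
    H = np.array([[h(u, v) for v in vs] for u in us])
    m, k = len(us), len(vs)
    for i1 in range(m):
        for i2 in range(i1 + 1, m):
            for j1 in range(k):
                for j2 in range(j1 + 1, k):
                    if H[i1, j1] * H[i2, j2] - H[i1, j2] * H[i2, j1] < -tol:
                        return False
    return True

grid = np.linspace(-3.0, 3.0, 15)
# Normal location family h(mu, x) ~ exp(-(x-mu)^2/2) is TP2 in (mu, x).
print(is_tp2(lambda mu, x: np.exp(-0.5 * (x - mu) ** 2), grid, grid))    # True
# The Cauchy location family is not TP2 (no monotone likelihood ratio).
print(is_tp2(lambda mu, x: 1.0 / (1.0 + (x - mu) ** 2), grid, grid))     # False
```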


Hollander et al. (1977) and Marshall et al. (2011) gave many examples of AI random variables. The following vectors of random variables X = (X1, . . . , Xn) are arrangement increasing.

(1) X1, . . . , Xn are independent and identically distributed random variables.
(2) X1, . . . , Xn are exchangeable random variables.
(3) Suppose X1, . . . , Xn are independent random variables with density functions h(λi, xi), i = 1, . . . , n, respectively, where λ1 ≤ λ2 ≤ · · · ≤ λn. Then f(x) = Π_{i=1}^n h(λi, xi) is arrangement increasing if and only if h(λ, x) is TP2 in λ and x.

Hollander et al. (1977) have shown that the following multivariate distributions are AI: multinomial, negative multinomial, multivariate hypergeometric, Dirichlet, inverted Dirichlet, negative multivariate hypergeometric, Dirichlet compound negative multinomial, multivariate logarithmic series, multivariate F, multivariate Pareto, and the multivariate normal with common variance and common covariance.

Majorization defines a partial ordering of the diversity of the components of vectors. For extensive and comprehensive details on the theory of majorization and its applications, please refer to Marshall et al. (2011).

Definition 2.5 (Majorization, Schur-Concave (Schur-Convex) and Log-Concave). For vectors x, y ∈ ℜn, x is said to be majorized by y, denoted by x ≼m y, if

Σ_{i=1}^n x(i) = Σ_{i=1}^n y(i) and Σ_{i=1}^j x(i) ≥ Σ_{i=1}^j y(i) for j = 1, . . . , n − 1,

where x(1) ≤ · · · ≤ x(n) denotes the increasing arrangement of the components of x.


A real-valued function φ defined on a set A ⊆ ℜn is said to be Schur-concave [Schur-convex] on A if, for any x, y ∈ A,

x ≽m y ⟹ φ(x) ≤ [≥] φ(y);

and φ is said to be log-concave on A ⊆ ℜn if A is a convex set and, for any x, y ∈ A and α ∈ [0, 1],

φ(αx + (1 − α)y) ≥ [φ(x)]^α [φ(y)]^{1−α}.
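Definition 2.5 reduces to comparing cumulative sums of the increasing arrangements, which is easy to automate. A small sketch of ours (the test vectors are arbitrary):

```python
import numpy as np

def majorizes(y, x, tol=1e-9):
    """Return True if x <=_m y in the sense of Definition 2.5: equal totals
    and the partial sums of the increasing arrangement of x dominate
    those of y."""
    x, y = np.sort(x), np.sort(y)          # increasing arrangements
    if abs(x.sum() - y.sum()) > tol:
        return False
    return bool(np.all(np.cumsum(x)[:-1] >= np.cumsum(y)[:-1] - tol))

a = np.array([3.0, 2.0, 1.0])
b = np.array([2.0, 2.0, 2.0])
print(majorizes(a, b))   # True:  a >=_m b
print(majorizes(b, a))   # False

# Spot check of Schur-concavity: phi(x) = exp(-sum x_i^2) should satisfy
# a >=_m b  =>  phi(a) <= phi(b).
phi = lambda x: np.exp(-np.sum(np.asarray(x) ** 2))
print(phi(a) <= phi(b))  # True
```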


Definition 2.6 (Supermodular (Submodular) Function). A real-valued function ϕ : ℜn → ℜ is said to be supermodular [submodular] if

ϕ(x ∨ y) + ϕ(x ∧ y) ≥ [≤] ϕ(x) + ϕ(y) for all x, y ∈ ℜn.

Here, ∨ and ∧ denote the componentwise maximum and the componentwise minimum, respectively. If ϕ : ℜn → ℜ has second partial derivatives, then it is supermodular [submodular] if and only if

∂²ϕ(x)/∂xi∂xj ≥ [≤] 0 for all i ≠ j and x ∈ ℜn.


Marshall et al. (2011) gave several examples of supermodular [submodular] functions. Below we give some examples of such functions arising from weighted sums; a numerical spot check follows the list.

(1) ϕ(a, x) = −Σ_{i=1}^n φ1(ai)φ2(xi): if φ1 and φ2 are increasing, then ϕ(a, x) is submodular; and if φ1 is decreasing and φ2 is increasing, then ϕ(a, x) is supermodular.
(2) ϕ(a, x) = Σ_{i=1}^n φ(xi − ai): if φ is convex, then ϕ(a, x) is submodular; and if φ is concave, then ϕ(a, x) is supermodular.
(3) ϕ(a, x) = Σ_{i=1}^n max{ai, xi} is submodular.
(4) ϕ(a, x) = Σ_{i=1}^n sup{c1 ai + c2 xi : (c1, c2) ∈ C} is submodular, where C ⊆ ℜ2.
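Supermodularity and submodularity can be spot-checked by sampling the cross difference in Definition 2.6. The sketch below is our illustration; it confirms examples (2) and (3) numerically for a single summand, which suffices since the sums are termwise.

```python
import numpy as np

def cross_diff(phi, u, v):
    """phi(u v max) + phi(u v min) - phi(u) - phi(v); >= 0 everywhere means
    supermodular, <= 0 everywhere means submodular (Definition 2.6)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return phi(np.maximum(u, v)) + phi(np.minimum(u, v)) - phi(u) - phi(v)

rng = np.random.default_rng(2)

# Example (2) with convex phi(t) = t^2: (a, x) -> (x - a)^2 is submodular.
f = lambda p: (p[1] - p[0]) ** 2
signs = [cross_diff(f, rng.normal(size=2), rng.normal(size=2))
         for _ in range(10_000)]
print(max(signs) <= 1e-12)   # True: all cross differences are <= 0

# Example (3): (a, x) -> max{a, x} is likewise submodular.
g = lambda p: np.maximum(p[0], p[1])
signs = [cross_diff(g, rng.normal(size=2), rng.normal(size=2))
         for _ in range(10_000)]
print(max(signs) <= 1e-12)   # True
```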

3. Main results

In this section, we study stochastic comparisons of weighted sums of the form Σ_{i=1}^n φ(Xi, ai), where X1, . . . , Xn are random variables with joint density function f(x). In what follows, we make the following assumptions:

(A1) f(x) is log-concave;
(A2) f(x) is arrangement increasing;
(A3) φ : ℜ2 → ℜ is a convex function.

We consider both the usual stochastic order and the increasing convex order for comparison purposes.


3.1. Usual stochastic ordering


Before we give the main result, we list several lemmas which will be used in the sequel.

Lemma 3.1 (Prékopa, 1973; Eaton, 1982). Suppose that h : ℜm × ℜk → ℜ+ is a log-concave function and that

g(x) = ∫_{ℜk} h(x, z) dz

is finite for each x ∈ ℜm. Then g is log-concave on ℜm.


Lemma 3.2 (Pan et al., 2013). If g : ℜ2 → ℜ+ is log-concave and −g is AI, i.e.,

g(x(2), x(1)) ≥ g(x(1), x(2)) for all (x1, x2) ∈ ℜ2, (3.1)

then

(x1, x2) ≼m (y1, y2) ⟹ g(x(1), x(2)) ≥ g(y(1), y(2)).

We state Theorem 23 of Karlin and Rinott (1983) as a lemma and give a new proof of it as follows.


Lemma 3.3 (Karlin and Rinott, 1983). Let X ∈ ℜn have a log-concave density and let φ(x, a) be convex in (x, a) ∈ ℜn+m. Then g(a) = P(φ(X, a) ≤ t) is log-concave on ℜm.


Proof. If we denote

A = {(x, a) ∈ ℜn+m : φ(x, a) ≤ t},

then A is a convex set due to the convexity of φ. Thus, the indicator function IA is log-concave. If f(x) denotes the joint density function of X, then we have

g(a) = ∫_{ℜn} f(x) IA(x, a) dx.

Since f(x) is log-concave, f(x)IA(x, a) is log-concave in (x, a). By Lemma 3.1, g(a) is log-concave. □
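Lemma 3.3 can be illustrated numerically: log-concavity of g implies the midpoint inequality g((a + b)/2)^2 ≥ g(a) g(b). A Monte Carlo sketch of ours, with the arbitrary choices of a standard normal X, φ(x, a) = (x − a)^2 and t = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=1_000_000)          # standard normal: log-concave density

def g(a, t=1.0):
    """Monte Carlo estimate of P(phi(X, a) <= t) with phi(x, a) = (x - a)^2,
    which is jointly convex in (x, a)."""
    return np.mean((X - a) ** 2 <= t)

for a, b in [(-1.0, 1.0), (0.0, 2.0), (-2.0, 0.5)]:
    mid = 0.5 * (a + b)
    print(g(mid) ** 2 >= g(a) * g(b))   # True (up to Monte Carlo error)
```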

Lemma 3.4. Let X ∈ ℜn have a log-concave density and let φ(x, a) be concave in (x, a) ∈ ℜn+m. Then g(a) = P(φ(X, a) ≥ t) is log-concave on ℜm.

Theorem 3.5. Under the assumptions (A1), (A2) and (A3),

(i) if φ is supermodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xi, a(i)) ≥st Σ_{i=1}^n φ(Xi, b(i));

(ii) if φ is submodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xn−i+1, a(i)) ≥st Σ_{i=1}^n φ(Xn−i+1, b(i)).


Proof. Since the proof of part (ii) is quite similar to that of part (i), we only prove part (i). By the nature of majorization, we only need to prove that

φ(X1, a(1)) + φ(X2, a(2)) + Σ_{i=3}^n φ(Xi, ci) ≥st φ(X1, b(1)) + φ(X2, b(2)) + Σ_{i=3}^n φ(Xi, ci)

holds for all

(a(1), a(2), c3, . . . , cn) ≽m (b(1), b(2), c3, . . . , cn)

with (a(1), a(2)) ≽m (b(1), b(2)). For any fixed t, we denote

g(a1, a2) = P(ϕ(X, a) ≤ t)

with

ϕ(x, a) = φ(x1, a1) + φ(x2, a2) + Σ_{i=3}^n φ(xi, ci).

Since φ is convex, ϕ(x, a) is convex on ℜ2n. By Lemma 3.3, g(a1, a2) is log-concave. From Lemma 3.2, it is sufficient to prove (3.1). Hence, we only need to prove

ϕ(X, a21) ≤st ϕ(X, a12), (3.2)

where a12 = (a(1), a(2)) and a21 = (a(2), a(1)). It is sufficient to prove that, for all increasing functions h,

E[h(ϕ(X, a12))] ≥ E[h(ϕ(X, a21))].

Now,

E[h(ϕ(X, a12))] − E[h(ϕ(X, a21))]
= (∫···∫_{x1≤x2} + ∫···∫_{x1≥x2}) h(ϕ(x, a12)) f(x) dx1 . . . dxn − (∫···∫_{x1≤x2} + ∫···∫_{x1≥x2}) h(ϕ(x, a21)) f(x) dx1 . . . dxn
= ∫···∫_{x1≤x2} [h(ϕ(x, a12)) − h(ϕ(x, a21))] [f(x) − f(π12(x))] dx1 . . . dxn.

Since f(x) is AI, it follows that

f(x) − f(π12(x)) ≥ 0, ∀ x1 ≤ x2.

Meanwhile, since φ is supermodular, for all x1 ≤ x2 and a(1) ≤ a(2), we have

φ(x1, a(1)) + φ(x2, a(2)) ≥ φ(x1, a(2)) + φ(x2, a(1)).

Thus,

φ(x1, a(1)) + φ(x2, a(2)) + Σ_{i=3}^n φ(xi, ci) ≥ φ(x1, a(2)) + φ(x2, a(1)) + Σ_{i=3}^n φ(xi, ci).

Since h is increasing, we have

h(ϕ(x, a12)) − h(ϕ(x, a21)) ≥ 0.

Therefore, (3.2) holds and the desired result follows. □



Since exchangeable random variables are arrangement increasing, the following result follows immediately from this theorem.

Corollary 3.6. Let X1, . . . , Xn be exchangeable random variables satisfying assumption (A1). If φ is a convex function on ℜ2, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xi, ai) ≥st Σ_{i=1}^n φ(Xi, bi).
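Since S ≥st T is equivalent to pointwise dominance of the survival functions, Corollary 3.6 can be probed by comparing empirical survival curves. The following sketch is ours, not from the paper; it uses exchangeable equicorrelated normal variables, which satisfy (A1), and the jointly convex choice φ(x, a) = (x − a)^2. All numerical choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_sim, rho = 3, 200_000, 0.3

# Exchangeable equicorrelated normal vector; its density is log-concave (A1).
cov = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)
X = rng.multivariate_normal(np.zeros(n), cov, size=n_sim)

a = np.array([1.0, 2.0, 3.0])       # a >=_m b; component order is immaterial
b = np.array([2.0, 2.0, 2.0])       # by exchangeability
phi = lambda x, w: (x - w) ** 2     # jointly convex in (x, w)

S_a = phi(X, a).sum(axis=1)
S_b = phi(X, b).sum(axis=1)

# S_a >=_st S_b iff P(S_a > t) >= P(S_b > t) for every t.
ts = np.linspace(0.0, 40.0, 21)
print(all((S_a > t).mean() >= (S_b > t).mean() - 0.005 for t in ts))  # True
```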


Proposition 3.7. Under the assumptions (A1), (A2) and (A3),

(i) if φ is supermodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xi, a(i)) ≥st Σ_{i=1}^n φ(Xi, bi);

(ii) if φ is submodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xn−i+1, a(i)) ≥st Σ_{i=1}^n φ(Xn−i+1, bi).

Proof. The proof of (ii) is quite similar to that of (i), so we only prove (i). By the nature of majorization, we only need to prove that for all

(a(1), a(2), c3, . . . , cn) ≽m (b(1), b(2), c3, . . . , cn),

with (a(1), a(2)) ≽m (b(1), b(2)), we have

φ(X1, a(1)) + φ(X2, a(2)) + Σ_{i=3}^n φ(Xi, ci) ≥st φ(X1, b1) + φ(X2, b2) + Σ_{i=3}^n φ(Xi, ci).

From (3.2), we have

φ(X1, b(1)) + φ(X2, b(2)) + Σ_{i=3}^n φ(Xi, ci) ≥st φ(X1, b(2)) + φ(X2, b(1)) + Σ_{i=3}^n φ(Xi, ci). (3.3)

Meanwhile, from Theorem 3.5, we have

φ(X1, a(1)) + φ(X2, a(2)) + Σ_{i=3}^n φ(Xi, ci) ≥st φ(X1, b(1)) + φ(X2, b(2)) + Σ_{i=3}^n φ(Xi, ci). (3.4)

Combining (3.3) and (3.4), we have

φ(X1, a(1)) + φ(X2, a(2)) + Σ_{i=3}^n φ(Xi, ci) ≥st φ(X1, b(2)) + φ(X2, b(1)) + Σ_{i=3}^n φ(Xi, ci). (3.5)

Since (b1, b2) equals either (b(1), b(2)) or (b(2), b(1)), the desired result follows from (3.4) and (3.5). □

Corollary 3.8. Under the assumptions (A1), (A2) and (A3), for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xn−i+1 − a(i)) ≥st Σ_{i=1}^n φ(Xn−i+1 − bi).

Proof. It is easy to prove that φ(x − a) is convex and submodular on ℜ2 whenever φ is convex on ℜ. By Proposition 3.7(ii), the result follows. □

If the function φ is concave, we get similar results in the same way.

Theorem 3.9. Let φ be a concave function on ℜ2. Then under the assumptions (A1) and (A2),

(i) if φ is supermodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xn−i+1, a(i)) ≥st Σ_{i=1}^n φ(Xn−i+1, bi);

(ii) if φ is submodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xi, a(i)) ≥st Σ_{i=1}^n φ(Xi, bi).

Corollary 3.10. Let φ be a concave function on ℜ. Under the assumptions (A1) and (A2), for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xi − a(i)) ≥st Σ_{i=1}^n φ(Xi − bi).

Finally, we give an example where the assumptions (A1) and (A2) are satisfied.




Example 3.11. Let Y = (Y1, . . . , Yn) be an n-dimensional normal random vector with mean vector µ = (µ1, . . . , µn) satisfying µ1 ≤ µ2 ≤ · · · ≤ µn, and

Cov(Yi, Yj) = [ρ + (1 − ρ)δij] σ2, ∀ i, j,

where ρ > −1/(n − 1) and δij is the Kronecker delta, that is, δii = 1 and δij = 0 for all i ≠ j. Hollander et al. (1977) proved that the joint density function of Y is AI. It is easy to see that the joint density function of Y is log-concave. Thus, the assumptions (A1) and (A2) hold.
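A quick numerical check of ours: the covariance structure of Example 3.11 is positive definite precisely when ρ > −1/(n − 1), so that the density of Y exists and, being the exponential of a concave quadratic, is log-concave.

```python
import numpy as np

def equicorrelated_cov(n, rho, sigma2=1.0):
    """Cov(Y_i, Y_j) = [rho + (1 - rho) * delta_ij] * sigma2 (Example 3.11)."""
    return sigma2 * (rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n))

n = 4
for rho in (-1.0 / (n - 1) + 1e-6, 0.0, 0.5, 0.99):
    cov = equicorrelated_cov(n, rho)
    eigs = np.linalg.eigvalsh(cov)
    # Eigenvalues are 1 + (n-1)*rho (once) and 1 - rho (n-1 times), so the
    # minimum is positive exactly when rho > -1/(n-1).
    print(f"rho={rho:+.4f}  min eigenvalue={eigs.min():.6f}")
```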

3.2. Increasing convex ordering


So far, we have obtained all the results under the assumption (A1) that the joint density function is log-concave. However, there exist cases where the joint density function is not log-concave even if the marginal density functions are log-concave (cf. You and Li, 2014). An (1998) remarked that if X has a log-concave density f, then the density has at most an exponential tail, i.e.,

f(x) = O(exp(−λx)), λ > 0, x → ∞.

Thus, all the power moments E|X|^γ, γ > 0, of the random variable X exist. In this section, we prove the following theorem without the assumption (A1).

Theorem 3.12. Under the assumptions (A2) and (A3),

(i) if φ is supermodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xi, a(i)) ≥icx Σ_{i=1}^n φ(Xi, b(i));

(ii) if φ is submodular, then, for any a, b ∈ ℜn,

a ≽m b ⟹ Σ_{i=1}^n φ(Xn−i+1, a(i)) ≥icx Σ_{i=1}^n φ(Xn−i+1, b(i)).

Proof. We only prove part (i) as the proof of part (ii) follows along the same lines. By the nature of majorization, we only need to prove that for all

(a(1), a(2), c3, . . . , cn) ≽m (b(1), b(2), c3, . . . , cn),

with (a(1), a(2)) ≽m (b(1), b(2)), we have

φ(X1, a(1)) + φ(X2, a(2)) + Σ_{i=3}^n φ(Xi, ci) ≥icx φ(X1, b(1)) + φ(X2, b(2)) + Σ_{i=3}^n φ(Xi, ci).

We denote

ϕ(x, a) = φ(x1, a1) + φ(x2, a2) + Σ_{i=3}^n φ(xi, ci),

a12 = (a(1), a(2)), a21 = (a(2), a(1)), and similarly b12 = (b(1), b(2)), b21 = (b(2), b(1)). It is sufficient to prove that, for all increasing convex functions h, E[h(ϕ(X, a12))] ≥ E[h(ϕ(X, b12))] holds. Now,

E[h(ϕ(X, a12))] − E[h(ϕ(X, b12))]
= (∫···∫_{x1≤x2} + ∫···∫_{x1≥x2}) h(ϕ(x, a12)) f(x) dx1 . . . dxn − (∫···∫_{x1≤x2} + ∫···∫_{x1≥x2}) h(ϕ(x, b12)) f(x) dx1 . . . dxn
= ∫···∫_{x1≤x2} [h(ϕ(x, a12)) − h(ϕ(x, b12))] f(x) dx1 . . . dxn + ∫···∫_{x1≤x2} [h(ϕ(x, a21)) − h(ϕ(x, b21))] f(π12(x)) dx1 . . . dxn
≥ ∫···∫_{x1≤x2} [h(ϕ(x, a12)) + h(ϕ(x, a21)) − h(ϕ(x, b12)) − h(ϕ(x, b21))] f(π12(x)) dx1 . . . dxn.

The last inequality follows from the fact that f(x) is AI, i.e.,

f(x) − f(π12(x)) ≥ 0, ∀ x1 ≤ x2,

together with h(ϕ(x, a12)) − h(ϕ(x, b12)) ≥ 0 for x1 ≤ x2, which is established below. Thus, it is sufficient to prove

h(ϕ(x, a12)) + h(ϕ(x, a21)) − h(ϕ(x, b12)) − h(ϕ(x, b21)) ≥ 0. (3.6)

Since φ is convex and supermodular, for a(1) ≤ b(1) ≤ b(2) ≤ a(2) and x1 ≤ x2, we have

ϕ(x, a12) − ϕ(x, b12) = φ(x1, a(1)) + φ(x2, a(2)) − φ(x1, b(1)) − φ(x2, b(2))
= [φ(x1, a(1)) + φ(x1, a(2)) − φ(x1, b(1)) − φ(x1, b(2))] + [φ(x2, a(2)) − φ(x1, a(2)) + φ(x1, b(2)) − φ(x2, b(2))]
≥ 0,

ϕ(x, a12) − ϕ(x, b21) = φ(x1, a(1)) + φ(x2, a(2)) − φ(x2, b(1)) − φ(x1, b(2))
= [φ(x1, a(1)) + φ(x1, a(2)) − φ(x1, b(1)) − φ(x1, b(2))] + [φ(x2, a(2)) − φ(x1, a(2)) + φ(x1, b(1)) − φ(x2, b(1))]
≥ 0,

and

ϕ(x, a12) + ϕ(x, a21) − ϕ(x, b12) − ϕ(x, b21) = [φ(x1, a(1)) + φ(x1, a(2)) − φ(x1, b(1)) − φ(x1, b(2))] + [φ(x2, a(2)) + φ(x2, a(1)) − φ(x2, b(2)) − φ(x2, b(1))] ≥ 0.

Here the first bracketed terms are nonnegative by the convexity of φ(x1, ·) together with (b(1), b(2)) ≼m (a(1), a(2)), and the remaining terms are nonnegative by the supermodularity of φ.

Thus, for any increasing convex function h, if ϕ(x, a21) ≥ ϕ(x, b21), then h(ϕ(x, a21)) ≥ h(ϕ(x, b21)) and h(ϕ(x, a12)) ≥ h(ϕ(x, b12)), which implies (3.6). Otherwise, if ϕ(x, a21) ≤ ϕ(x, b21), we have

(ϕ(x, a21), ϕ(x, a12)) ≽m (ϕ(x, b21), ϕ(x, a21) − (ϕ(x, b21) − ϕ(x, a12))).

Therefore,

h(ϕ(x, a12)) + h(ϕ(x, a21)) ≥ h(ϕ(x, b21)) + h(ϕ(x, a21) − (ϕ(x, b21) − ϕ(x, a12)))
≥ h(ϕ(x, b21)) + h(ϕ(x, b12)),

where the first inequality is due to the convexity of h, and the second follows from ϕ(x, a21) − (ϕ(x, b21) − ϕ(x, a12)) = ϕ(x, a12) + ϕ(x, a21) − ϕ(x, b21) ≥ ϕ(x, b12) and the monotonicity of h. Therefore, (3.6) holds and the desired result follows. □
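Theorem 3.12 drops (A1), so it covers heavy-tailed AI vectors. As a sanity check of ours, not from the paper, the sketch below compares empirical stop-loss transforms for an exchangeable multivariate t vector, whose density is AI (being exchangeable) but not log-concave; by exchangeability the index reversal in part (ii) is immaterial, and φ(x, a) = (x − a)+ is convex and submodular. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_sim, df, rho = 3, 500_000, 3, 0.3

# Exchangeable multivariate t: Z / sqrt(W/df) with equicorrelated normal Z.
cov = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)
Z = rng.multivariate_normal(np.zeros(n), cov, size=n_sim)
W = rng.chisquare(df, size=(n_sim, 1))
X = Z / np.sqrt(W / df)

a = np.array([1.0, 2.0, 3.0])              # increasing arrangement, a >=_m b
b = np.array([2.0, 2.0, 2.0])
phi = lambda x, w: np.maximum(x - w, 0.0)  # convex, submodular (example (2))

S_a = phi(X, a).sum(axis=1)
S_b = phi(X, b).sum(axis=1)

# Common-random-number estimate of E[(S_a - t)+] - E[(S_b - t)+] >= 0.
for t in (0.0, 1.0, 2.0, 4.0, 8.0):
    gap = (np.maximum(S_a - t, 0.0) - np.maximum(S_b - t, 0.0)).mean()
    print(t, gap >= -1e-3)                 # True at every retention level
```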


4. An application to optimal capital allocation


In this section, we outline an application of our main results. Let X1, . . . , Xn be n risks in a portfolio. Assume that a company wishes to allocate the total capital p = p1 + · · · + pn to the corresponding risks. As defined in Xu and Hu (2012), the loss function

L(p) = Σ_{i=1}^n φ(Xi − pi), p ∈ A = {p ∈ ℜ+n : p1 + · · · + pn = p},

where φ is convex, is a reasonable criterion for setting the capital amount pi for Xi. A good capital allocation strategy makes the loss function L(p) as small as possible in some sense, and different capital allocation strategies affect the loss function via stochastic comparisons. Therefore, it is meaningful to find the best capital allocation strategy, if it exists, via the methods in Section 3.

Theorem 4.1. If the joint density function of X1, . . . , Xn satisfies assumptions (A1) and (A2) of Section 3, and if p* = (p1*, . . . , pn*) is the solution to the best capital allocation strategy (best in the sense of minimizing L(p) with respect to the usual stochastic order), then p1* ≤ p2* ≤ · · · ≤ pn*.

Proof. Let p = (p1, p2, p3, . . . , pn) be any admissible allocation, and let p̂ = (p2, p1, p3, . . . , pn). Without loss of generality, we assume p1 ≤ p2. By the nature of majorization, we only need to prove that

P(L(p̂) ≥ t) ≥ P(L(p) ≥ t), ∀ t.

That means

φ(X1 − p2) + φ(X2 − p1) + Σ_{i=3}^n φ(Xi − pi) ≥st φ(X1 − p1) + φ(X2 − p2) + Σ_{i=3}^n φ(Xi − pi).

Since the usual stochastic order is closed under convolution, we only need to prove

φ(X1 − p2) + φ(X2 − p1) ≥st φ(X1 − p1) + φ(X2 − p2). (4.1)

Under assumptions (A1) and (A2), (4.1) holds due to Corollary 3.8. Therefore, the desired conclusion follows. □
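To see Theorem 4.1 at work, one can minimize the expected loss E[L(p)] over the budget simplex numerically; more capital should go to the stochastically larger risks. The sketch below is ours, using AI normal risks as in Example 3.11 with φ(t) = t^2, for which the optimum is available in closed form as a cross-check; scipy's SLSQP handles the budget constraint, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, total, rho = 3, 10.0, 0.3

mu = np.array([1.0, 2.0, 3.0])                       # mu_1 <= mu_2 <= mu_3 (AI)
cov = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)  # Example 3.11 structure
X = rng.multivariate_normal(mu, cov, size=100_000)

def expected_loss(p):
    """Monte Carlo estimate of E[L(p)] with phi(t) = t^2."""
    return np.mean(((X - p) ** 2).sum(axis=1))

res = minimize(expected_loss, x0=np.full(n, total / n), method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - total}],
               bounds=[(0.0, total)] * n)
print(np.round(res.x, 3))   # allocations come out increasing: p1* <= p2* <= p3*

# For phi(t) = t^2 the exact optimum is p_i* = mu_i + (total - mu.sum()) / n.
print(mu + (total - mu.sum()) / n)   # [2.333, 3.333, 4.333]
```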


Acknowledgments

The authors would like to sincerely thank the anonymous referee for the constructive comments and suggestions which led to an improvement of the paper. The authors are also grateful to Professor Taizhong Hu and Professor Maochao Xu for their comments and contributions to this paper. The first author is supported by the National Natural Science Foundation of China (No. 11401558), the Fundamental Research Funds for the Central Universities (No. WK2040160010) and the China Postdoctoral Science Foundation (No. 2014M561823). The second author is supported by the National Natural Science Foundation of China (Nos. 11201452 and 11271346).

References

An, M.Y., 1998. Logconcavity versus logconvexity: a complete characterization. J. Econom. Theory 80, 350–369.
Cai, J., Wei, W., 2014. Some new notions of dependence with applications in optimal allocation problems. Insurance Math. Econom. 55, 200–209.
Eaton, M.L., 1982. A review of selected topics in multivariate probability inequalities. Ann. Statist. 10, 11–43.
Hollander, M., Proschan, F., Sethuraman, J., 1977. Functions decreasing in transposition and their applications in ranking problems. Ann. Statist. 5, 722–733.
Karlin, S., Rinott, Y., 1983. Comparison of measures, multivariate majorization, and applications to statistics. In: Karlin, S., et al. (Eds.), Studies in Econometrics, Time Series, and Multivariate Statistics. Academic Press, New York, pp. 465–489.
Ma, C., 2000. Convex orders for linear combinations of random variables. J. Statist. Plann. Inference 84, 11–25.
Mao, T., Pan, X., Hu, T., 2013. On orderings between weighted sums of random variables. Probab. Engrg. Inform. Sci. 27, 85–97.
Marshall, A.W., Olkin, I., Arnold, B.C., 2011. Inequalities: Theory of Majorization and its Applications. Springer, New York.
Müller, A., Stoyan, D., 2002. Comparison Methods for Stochastic Models and Risks. John Wiley & Sons, Ltd., West Sussex.
Pan, X., Xu, M., Hu, T., 2013. Some inequalities of linear combinations of independent random variables: II. Bernoulli 19 (5A), 1776–1789.
Prékopa, A., 1973. On logarithmic concave measures and functions. Acta Sci. Math. (Szeged) 34, 335–343.
Shaked, M., Shanthikumar, J.G., 2007. Stochastic Orders. Springer, New York.
Shanthikumar, J.G., Yao, D.D., 1991. Bivariate characterization of some stochastic order relations. Adv. Appl. Probab. 23, 642–659.
Xu, M., Hu, T., 2011. Some inequalities of linear combinations of independent random variables: I. J. Appl. Probab. 48, 1179–1188.
Xu, M., Hu, T., 2012. Stochastic comparisons of capital allocations with applications. Insurance Math. Econom. 50, 293–298.
You, Y., Li, X., 2014. Optimal capital allocations to interdependent actuarial risks. Insurance Math. Econom. 57, 104–113.
Yu, Y., 2011. Some stochastic inequalities for weighted sums. Bernoulli 17, 1044–1053.
