Accepted manuscript: W. Lee, K.C. Cheung, J.Y. Ahn, Multivariate countermonotonicity and the minimal copulas, Journal of Computational and Applied Mathematics (2016), http://dx.doi.org/10.1016/j.cam.2016.12.032
Received: 4 February 2016. Revised: 7 November 2016.
Multivariate Countermonotonicity and the Minimal Copulas

Woojoo Lee (a), Ka Chun Cheung (b), Jae Youn Ahn (c,*)

(a) Department of Statistics, Inha University, 235 Yonghyun-Dong, Nam-Gu, Incheon 402-751, Korea.
(b) Department of Statistics and Actuarial Science, The University of Hong Kong, Pokfulam Road, Hong Kong.
(c) Department of Statistics, Ewha Womans University, 11-1 Daehyun-Dong, Seodaemun-Gu, Seoul 120-750, Korea.

Abstract

The Fréchet-Hoeffding upper and lower bounds play an important role in various bivariate optimization problems because they are, respectively, the maximum and minimum of bivariate copulas in concordance order. However, while the Fréchet-Hoeffding upper bound is the maximum of multivariate copulas in any dimension, there is no minimum copula for dimensions d ≥ 3. Therefore, multivariate minimization problems with respect to a copula are not as straightforward as the corresponding maximization problems. When the minimum copula is absent, minimal copulas are useful for multivariate minimization problems. We illustrate the motivation for generalizing joint mixability to d-countermonotonicity, defined in Lee and Ahn (2014), through variance minimization problems, and show that d-countermonotonic copulas are minimal copulas.

Keywords: Countermonotonicity, Comonotonicity, Minimal Copula, Variance Minimization
JEL Classification: C100



* Corresponding author.
Email addresses: [email protected] (Woojoo Lee), [email protected] (Ka Chun Cheung), [email protected] (Jae Youn Ahn)

1. Introduction

Consider a d-dimensional random vector X = (X1, . . . , Xd) having H : R^d → [0, 1] as its distribution function and F1, . . . , Fd as univariate marginal distributions. Define Fd(F1, · · · , Fd) as the Fréchet space of d-variate random vectors having univariate marginal distributions F1, · · · , Fd. When F1, . . . , Fd are uniform[0, 1] marginal distributions, the corresponding Fréchet space is denoted by Fd. Thus H ∈ Fd(F1, · · · , Fd); equivalently, we write X ∈d Fd(F1, · · · , Fd).

The Fréchet space Fd(F1, . . . , Fd) has a maximum element known as the Fréchet upper bound. The Fréchet upper bound represents the concept of extreme positive dependence and has been extensively studied and applied in actuarial science under the name of comonotonicity (Dhaene et al., 2002, 2006; Cheung, 2008; Cheung and Lo, 2013). However, with regard to the Fréchet lower bound, which corresponds to the concept of extreme negative dependence, things are not as straightforward. While the Fréchet lower bound is a lower bound for the Fréchet space, it is not generally admissible in Fd(F1, . . . , Fd). Joe (1990) gave a necessary and sufficient condition for the Fréchet lower bound (d ≥ 3) to be admissible in Fd(F1, . . . , Fd): if, for all (x1, . . . , xd) ∈ R^d with 0 < Fi(xi) < 1, either

    F1(x1) + · · · + Fd(xd) ≤ 1    or    F1(x1) + · · · + Fd(xd) ≥ d − 1,

then the Fréchet lower bound (d ≥ 3) is a proper distribution in Fd(F1, . . . , Fd). Mathematical properties of the lower bound and its applications in the actuarial context were further studied in Dhaene and Denuit (1999) and Cheung and Lo (2014) under the name of mutual exclusivity. However, the above two conditions impose severe restrictions on the marginal distributions, so many interesting distribution functions are eliminated. Therefore, various problems of modeling extreme negative dependence have been tackled differently within specific contexts.

Some concrete forms of extreme negative dependence are observed in the following optimization problem. Starting from the antithetic variates method in Monte Carlo simulation, the variance minimization problem of the sum of random variables

    X1 + · · · + Xd    (1)

with given marginal distributions has been widely studied in statistics and insurance (Rüschendorf and Uckelmann, 2002; Knott and Smith, 2006). In particular, since the variance of the sum is minimized if

    P(X1 + · · · + Xd = c) = 1  for some constant c ∈ R,

finding marginal distributions with a constant sum is of interest. After being proposed by Wang and Wang (2011), marginal distributions of random variables with a constant sum were extensively studied in Puccetti and Wang (2015) and Wang and Wang (2016) under the names of complete mixability and joint mixability. The former requires that X1, . . . , Xd have the same marginal distribution, while the latter allows them to have different marginal distributions. Recently, Lee and Ahn (2014) introduced a general concept of extreme negative dependence called d-countermonotonicity (d-CTM). While Lee and Ahn (2014) originally used d-CM to denote d-countermonotonicity, this paper uses d-CTM to avoid confusion with the abbreviation for complete mixability in Wang and Wang (2011). By definition, the set of d-CTM copulas includes the complete and joint mixability concepts (Puccetti and Wang, 2015). Since d-CTM copulas achieve the minimum of various concordance measures, it was conjectured that d-CTM copulas are also useful for finding solutions in other contexts such as the variance minimization problem.

Before exploiting the usefulness of d-CTM copulas in various optimization problems, one must consider how to approach an optimization problem defined on a set that does not have a minimum element. For this issue, we share the view of Ahn (2015), where minimal elements play an important role in the optimization problem. Therefore, in this paper, we first define minimal copulas in terms of the concordance order and show that d-CTM copulas are truly minimal copulas. Knowing that d-CTM copulas are minimal copulas, we wonder whether there are other minimal copulas. Section 5 provides some examples of minimal copulas that are not d-CTM. Hence, d-CTM copulas form just one subset of the minimal copulas. Nonetheless, the set of d-CTM copulas is a meaningful subset of the minimal copulas, as shown in Lee and Ahn (2014).

The rest of this paper is organized as follows. In Sections 2 and 3, notation and preliminary results about d-CTM are given. In Section 4, the motivation for generalizing joint mixability to d-CTM is illustrated through variance minimization problems.
In Section 5, we show that a d-CTM copula is a minimal copula, and some further extensions are shown in Section 6, followed by our conclusion.

2. Notations and Preliminary Results

Throughout this paper, the sets

    [a, b] × [a, b] × · · · × [a, b] (⊂ R^d)    and    (a, b) × (a, b) × · · · × (a, b) (⊂ R^d)

are denoted by [a, b]^d and (a, b)^d, respectively. We also denote

    x = (x1, x2, · · · , xd)    and    u = (u1, · · · , ud)

as constant vectors in R^d and [0, 1]^d, respectively. In particular, 0 and 1 denote (0, · · · , 0) ∈ R^d and (1, · · · , 1) ∈ R^d, respectively. Two observations x, y ∈ R^d are said to be concordant if

    x1 < y1, · · · , xd < yd    or    x1 > y1, · · · , xd > yd.

We use x ≤ y to denote xi ≤ yi for all i = 1, · · · , d. Let

    X = (X1, X2, · · · , Xd)

be a d-variate random vector defined on a probability space (Ω, F, P). The joint distribution function of X is denoted by

    H(x) := P(X1 ≤ x1, · · · , Xd ≤ xd),  for x ∈ R^d,

and, for s ∈ R, the marginal distribution function of Xi is denoted by

    Fi(s) = P(Xi ≤ s),  for i = 1, · · · , d.

We assume that Fi is continuous throughout the paper unless specified otherwise. According to Sklar (1959), given H, there is a unique function C : [0, 1]^d → [0, 1] that satisfies

    H(x) = C(F1(x1), · · · , Fd(xd)).    (2)

The function C is called a d-dimensional copula and is itself a distribution function on [0, 1]^d. For more information on copulas, see Cherubini et al. (2004), Nelsen (2006) or Joe (2014). Here, we assume that the random vector U = (U1, · · · , Ud) has a copula C ∈ Fd as its distribution function; i.e.,

    P(U ≤ u) = C(u),  for u ∈ [0, 1]^d.

For a given copula C ∈ Fd, denote Ĉ as its joint survival function. Lastly, we say a copula C is smaller than the copula C* in concordance order, written C ≺ C*, if

    C(u) ≤ C*(u)    and    Ĉ(u) ≤ Ĉ*(u)    for all u ∈ [0, 1]^d;

equivalently, we say the random vector U is smaller than the random vector U* in concordance order if C ≺ C*.

3. Review of Countermonotonicity and Minimal Copulas

3.1. Countermonotonicity

Countermonotonicity, as a counterpart of comonotonicity, has gained popularity in actuarial science and finance (Dhaene et al., 2002; Cheung and Lo, 2013). It may be particularly useful to assess the aggregate risk of dependent financial assets (Cheung et al., 2014). First, we review the formal definition of countermonotonicity.

Definition 1. A set A ⊂ R^2 is countermonotonic if the following inequality holds:

    (x1 − y1)(x2 − y2) ≤ 0    for all x, y ∈ A.

X is countermonotonic if it has countermonotonic support. For a bivariate countermonotonic random vector X ∈d F2(F1, F2), the copula of X is uniquely expressed as

    C(u) = max{u1 + u2 − 1, 0},  where (u1, u2) ∈ [0, 1]^2.
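As a quick numerical sanity check (ours, not part of the original paper), the following Python sketch verifies that the empirical copula of the countermonotonic pair (V, 1 − V) matches max{u1 + u2 − 1, 0} on a few grid points:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(size=200_000)
U = np.column_stack([v, 1.0 - v])  # countermonotonic pair (V, 1 - V)

def W(u1, u2):
    # bivariate Frechet-Hoeffding lower bound max{u1 + u2 - 1, 0}
    return np.maximum(u1 + u2 - 1.0, 0.0)

for u1, u2 in [(0.2, 0.3), (0.6, 0.7), (0.9, 0.5)]:
    emp = np.mean((U[:, 0] <= u1) & (U[:, 1] <= u2))  # empirical copula C(u1, u2)
    assert abs(emp - W(u1, u2)) < 5e-3
print("copula of (V, 1 - V) agrees with W on the test grid")
```

The agreement is exact in distribution: P(V ≤ u1, 1 − V ≤ u2) = P(1 − u2 ≤ V ≤ u1) = max{u1 + u2 − 1, 0}.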

It is well known that every d-dimensional copula C ∈ Fd is bounded in the following sense:

    W(u) ≤ C(u) ≤ M(u),

where

    W(u) := max{u1 + · · · + ud − (d − 1), 0}    and    M(u) := min{u1, · · · , ud},

which are called the Fréchet-Hoeffding lower bound and the Fréchet-Hoeffding upper bound, respectively. While W is the countermonotonic copula for d = 2, it is not a copula in general for d ≥ 3. In fact, it is well known that there is no minimum copula for d ≥ 3; see Kotz and Seeger (1992) and Joe (2014) for details. In the absence of the minimum copula, Lee and Ahn (2014) introduced an alternative concept of extreme negative dependence called d-CTM, which is known to be useful in some optimization problems. As shown in Lemma 1 of Lee and Ahn (2014), d-CTM does not depend on marginal distributions. Hence, it is often convenient to study d-CTM in terms of copulas. For this, we provide the following version of the d-CTM definition.

Definition 2. A set S ⊆ [0, 1]^d is called a d-CTM set if there exist strictly increasing continuous functions g1, · · · , gd defined on [0, 1] such that

    S = { u ∈ [0, 1]^d : g1(u1) + · · · + gd(ud) = 1 }.

Definition 3. A d-variate random vector U ∈d Fd is called d-CTM if there exists a d-CTM set S ⊆ [0, 1]^d such that P(U ∈ S) = 1. Equivalently, we say that the copula of U is d-CTM if U is d-CTM.
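Before proceeding, here is a small illustrative check (ours, not from the paper) of the claim above that W is not a copula for d ≥ 3: the W-volume of the box [1/2, 1]^3, computed by inclusion-exclusion over its corners, is negative, so W cannot be a distribution function in dimension 3.

```python
import itertools

def W(u):
    # Frechet-Hoeffding lower bound W(u) = max{u1 + ... + ud - (d - 1), 0}
    return max(sum(u) - (len(u) - 1), 0.0)

# W-volume of the box [1/2, 1]^3 via the inclusion-exclusion sum over its corners
a, b = 0.5, 1.0
vol = 0.0
for corner in itertools.product([a, b], repeat=3):
    sign = (-1) ** sum(1 for c in corner if c == a)
    vol += sign * W(corner)
print(vol)  # -0.5: negative "mass", so W is not a distribution function for d = 3
```

A genuine distribution function must assign nonnegative mass to every box; W fails this already on [1/2, 1]^3.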

Note that when d = 2, this definition coincides with countermonotonicity given in Definition 1, as shown in Theorem 2 of Lee and Ahn (2014).

Notation 1. In order to show explicitly the dependence of d-CTM on the strictly increasing functions g1, · · · , gd on [0, 1], we say that U is d-CTM with parameter functions (g1, · · · , gd) if P(U ∈ S) = 1 for some d-CTM set S given in Definition 2. Note that Definition 3 only requires the existence of parameter functions. The parameter functions need not be unique, nor do they uniquely determine the d-CTM copula, as will be shown in Example 1 in Section 5.

Using Definition 3 and Notation 1, we restate the key theorem in Lee and Ahn (2014), which claims that d-CTM copulas form a minimal class, in the following sense.

Theorem 1 (Lee and Ahn (2014)). Let U and U* be random vectors whose distribution functions are the copulas C and C*, respectively. Further assume that U is d-CTM with parameter functions (g1, · · · , gd). If U* ≺ U, then U* is also d-CTM with parameter functions (g1, · · · , gd).

The case where a subvector of U satisfies Definition 3 is also interesting in practice.


Definition 4 (Lee and Ahn (2014)). For any integer m satisfying d > m ≥ 2, a random vector U is called partially m-CTM if there exists a subset {i1, · · · , im} ⊆ {1, · · · , d} with i1 < · · · < im such that (Ui1, Ui2, · · · , Uim) is m-CTM.

In the study of d-CTM, we find it convenient to use the concept of joint mixability (or complete mixability), because both definitions are based on constant sums. However, the two definitions can be distinguished in that d-CTM is a concept based purely on the copula, while joint mixability is a concept based on the marginal distributions. The formal definition of joint mixability is as follows.

Definition 5 (Wang and Wang (2011, 2016)). Distribution functions F1, · · · , Fd are jointly mixable if there exist d random variables Xi ∼ Fi for i = 1, · · · , d such that

    P( X1 + · · · + Xd = c ) = 1

for some c ∈ R. Such a c is called a joint center of (F1, · · · , Fd), and the random vector X is called a joint mix. In particular, if F1, · · · , Fd are jointly mixable and F1 = · · · = Fd, we say F1, · · · , Fd are completely mixable.

Remark 1. It is interesting to investigate the existence of d-CTM copulas for given parameter functions, say (g1, · · · , gd). By definition, the existence of a d-CTM copula with parameter functions (g1, · · · , gd) is equivalent to the joint mixability of F1, · · · , Fd, where Fi is defined as the distribution function of gi(U) for a uniform[0, 1] random variable U. Since not all combinations of marginal distribution functions are compatible with joint mixability, not all parameter functions allow the existence of d-CTM copulas. Details on the combinations of parameter functions (marginal distribution functions) allowing d-CTM can be found in Wang and Wang (2016).

The following simple version of d-CTM (hence, a simple version of joint mixability) is known to be useful for some minimization problems (Lee and Ahn, 2014).

Definition 6 (Lee and Ahn (2014)). A d-variate random vector U is strict d-CTM if

    P( (2/d)(U1 + · · · + Ud) = 1 ) = 1.

Equivalently, we say that H is strict d-CTM if U is strict d-CTM.

It is obvious that strict d-CTM is d-CTM with parameter functions

    g1(v) = · · · = gd(v) = (2/d) v    (3)

for v ∈ [0, 1]. The existence of a strict d-CTM copula was shown in Rüschendorf and Uckelmann (2002).
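The following Python sketch (our own illustration, not part of the paper) checks Definition 6 for d = 4 using the vector (V, V*, 1 − V, 1 − V*) built from two independent uniforms: each coordinate is uniform on [0, 1] and the coordinates sum to d/2 = 2.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
v, w = rng.uniform(size=(2, 100_000))
U = np.column_stack([v, w, 1.0 - v, 1.0 - w])   # (V, V*, 1 - V, 1 - V*)

# strict d-CTM: P( (2/d)(U1 + ... + Ud) = 1 ) = 1
assert np.allclose((2.0 / d) * U.sum(axis=1), 1.0)
# the sum is constant, so its variance is (numerically) zero
assert np.var(U.sum(axis=1)) < 1e-25
# each marginal is uniform[0, 1]: mean 1/2 and variance 1/12
assert np.allclose(U.mean(axis=0), 0.5, atol=5e-3)
assert np.allclose(U.var(axis=0), 1.0 / 12.0, atol=5e-3)
print("(V, V*, 1 - V, 1 - V*) is strict 4-CTM")
```

This simple pairing trick works for any even d; the general existence result for strict d-CTM copulas is the one cited above from Rüschendorf and Uckelmann (2002).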

3.2. Minimal Copulas and Optimization Problems

In this subsection, we define minimal copulas in terms of concordance ordering. Although concordance ordering is a partial ordering, because not every pair of copulas is comparable in this order, it is one of the most useful dependence orderings and is reflected in various dependence measures such as Kendall's tau and Spearman's rho. First, we start with the definition of minimal copulas.

Definition 7. Let U be a random vector having a copula C ∈ Fd as its distribution function. Then, we call C a minimal copula if the inequality

    C* ≺ C

for some d-dimensional copula C* implies C* = C. Equivalently, we call U minimal if C is a minimal copula.

Various optimization problems involve concordance order. For example, finding the minimum of concordance measures such as Kendall's tau or Spearman's rho is related to the existence of a minimal copula that minimizes the measure. Tankov (2011) studied a class of functionals on bivariate distributions that preserve concordance order. In such cases, studying minimal copulas is important, as they can provide solutions to the optimization problems. In order to formulate the above optimization problems generally, consider a functional κ : Fd → R that preserves the concordance order in the following sense:

    κ(C*) ≤ κ(C)    for any copulas C, C* ∈ Fd satisfying C* ≺ C.

Then, a concordance optimization problem can be defined as

    min_{C ∈ Fd} κ(C)    or    arg min_{C ∈ Fd} κ(C).    (4)
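As a concrete instance of such a functional κ (our illustration; this is Spearman's rho, which up to an affine normalization preserves concordance order), the following sketch evaluates ρ(C) = 12 ∬ C(u, v) du dv − 3 for the three bivariate copulas W ≺ Π ≺ M, where Π(u, v) = uv is the independence copula, and confirms that κ is minimized at the minimal copula W:

```python
import numpy as np

def rho(C, n=400):
    # Spearman's rho of a bivariate copula: 12 * int int C(u, v) du dv - 3,
    # approximated by a midpoint rule on an n x n grid
    u = (np.arange(n) + 0.5) / n
    uu, vv = np.meshgrid(u, u)
    return 12.0 * C(uu, vv).mean() - 3.0

W = lambda u, v: np.maximum(u + v - 1.0, 0.0)   # minimum copula (d = 2)
P = lambda u, v: u * v                           # independence copula
M = lambda u, v: np.minimum(u, v)                # maximum copula

r = [rho(C) for C in (W, P, M)]
print(r)  # approximately [-1, 0, 1]
assert r[0] < r[1] < r[2]                        # kappa is smallest at W
```

The exact values are ρ(W) = −1, ρ(Π) = 0, ρ(M) = 1, matching the concordance ordering W ≺ Π ≺ M.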

Furthermore, (4) is called a strict concordance optimization problem if κ : Fd → R preserves the strict concordance order, that is,

    κ(C*) < κ(C)    for any copulas C, C* ∈ Fd satisfying C* ≺ C and C* ≠ C.

One interesting concordance optimization problem is the variance minimization problem. Given marginal distributions F1, · · · , Fd, define

    κ(C) := Var(X1 + · · · + Xd)    with (X1, · · · , Xd) ∼ C(F1, · · · , Fd).

Because κ(C) preserves the strict concordance order when the marginal distributions are continuous, the following variance minimization problem

    min_{H ∈ Fd(F1, · · · , Fd)} Var(X1 + · · · + Xd) = min_{C ∈ Fd} κ(C)    (5)

becomes a strict optimization problem. Hence, only minimal copulas can provide the solution of this problem. In particular, Ahn (2015) showed that d-CTM copulas provide the solution of (5) for uniform marginal distributions. In Section 4, we generalize the result of Ahn (2015) to more general marginal distributions, including the unimodal-symmetric location-scale family and elliptical distributions.

4. Motivation from the Variance Minimization Problem

In this section, the motivation for generalizing joint mixability to d-CTM is provided through the variance minimization problem (5). Note that the simplest case is when F1, . . . , Fd are jointly mixable, which is a special case of d-CTM (Wang and Wang, 2016). However, when F1, . . . , Fd are not jointly mixable, the solution is not straightforward at all. We show that a d-CTM copula is the solution of the variance minimization problem for two non-trivial cases where the marginal distributions are not jointly mixable. First, we provide the following lemma, which is the key step in the proofs for the two non-trivial cases.

Lemma 1. Let Yi, i = 1, · · · , d, be random variables with Var(Yi) = σi². Define a > 0 such that

    σ1 a = σ2 + · · · + σd.    (6)

Then we have the following inequality:

    cov( Y1, a Y1 + Y2 + · · · + Yd ) ≥ 0,    (7)

where the equality holds if and only if

    Var( a Y1 + Y2 + · · · + Yd ) = 0.    (8)

Proof. Since corr[Y1, Yi] ≥ −1 for any i = 1, · · · , d, we first have the inequality

    cov( Y1, a Y1 + Y2 + · · · + Yd ) = σ1 ( a σ1 + σ2 corr[Y1, Y2] + · · · + σd corr[Y1, Yd] ) ≥ 0.

Now it remains to show that the equality in (7) holds if and only if (8) holds. First, observe that (8) trivially implies

    cov( Y1, a Y1 + Y2 + · · · + Yd ) = 0.    (9)

To show the other direction, assume (9). Then

    Var( a Y1 + Y2 + · · · + Yd )
      = cov( Y2 + · · · + Yd, a Y1 + Y2 + · · · + Yd )
      = Var( Y2 + · · · + Yd ) − Var(a Y1)
      = [ σ2² + · · · + σd² + 2 Σ_{2 ≤ i < j ≤ d} σi σj corr[Yi, Yj] ] − ( σ2 + · · · + σd )²,    (10)

where the first and second equalities come from assumption (9), and the last equality follows by straightforward calculation from the definitions of a and the Yi. Since no correlation coefficient can be larger than 1, (10) implies

    Var( a Y1 + Y2 + · · · + Yd ) ≤ 0,

which in turn implies (8).
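The inequality (7) can be stress-tested numerically (our sketch, not part of the paper): for an arbitrary covariance matrix S of Y, cov(Y1, a Y1 + Y2 + · · · + Yd) reduces to a·S[0,0] + S[0,1] + · · · + S[0,d−1], which should be nonnegative whenever a is chosen as in (6).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
for _ in range(1000):
    A = rng.normal(size=(d, d))
    S = A @ A.T + 1e-9 * np.eye(d)      # a random valid covariance matrix of Y
    sd = np.sqrt(np.diag(S))
    a = sd[1:].sum() / sd[0]            # a from (6): sigma_1 a = sigma_2 + ... + sigma_d
    # cov(Y1, a*Y1 + Y2 + ... + Yd) = a*S[0,0] + S[0,1] + ... + S[0,d-1]
    c = a * S[0, 0] + S[0, 1:].sum()
    assert c >= -1e-9                   # inequality (7), up to rounding
print("inequality (7) held in all 1000 random trials")
```

The lower bound in the proof, σ1(a σ1 − σ2 − · · · − σd) = 0, is exactly what the choice (6) of a guarantees.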

4.1. Unimodal-Symmetric Location-Scale Family

Marginally, let

    Yi =d µi + σi X    (11)

be in the location-scale family of a continuous random variable X, which is unimodal-symmetric with finite variance, for some µi ∈ R and σi > 0. Further, let Fi be the distribution function of Yi. Without loss of generality, we assume that σ1 = max{σ1, · · · , σd}. For x ∈ R^d_+, a function g is defined as

    g(x) := max{x1, · · · , xd},                            if 2 max{x1, · · · , xd} ≤ x1 + · · · + xd;
            (x1 + · · · + xd) − max{x1, · · · , xd},        if 2 max{x1, · · · , xd} > x1 + · · · + xd.

Theorem 2. Let Yi, i = 1, · · · , d, be in the location-scale family defined in (11), and let σ* = σ/σ1 with σ := (σ1, · · · , σd). Then

    Var( Y1 + · · · + Yd ) ≥ (1 − g(σ*))² Var(Y1),    (12)

where the equality holds if and only if

    Var( g(σ*) Y1 + Y2 + · · · + Yd ) = 0.    (13)

Further, there exists a d-CTM random vector Y satisfying (13) in Fd(F1, · · · , Fd).

Proof. First note that

    g(σ*) = 1,                        if σ1 ≤ σ2 + · · · + σd;
            (σ2 + · · · + σd)/σ1,     if σ1 > σ2 + · · · + σd.

We first consider the case σ1 ≤ σ2 + · · · + σd. Since g(σ*) = 1, inequality (12) is obvious. Further, Corollary 3.6 in Wang and Wang (2016) implies the existence of Y ∈d Fd(F1, · · · , Fd) satisfying (13), and clearly Y is d-CTM.

Now consider the case σ1 > σ2 + · · · + σd. First, observe that

    1 − g(σ*) > 0,    (14)

and that a = g(σ*) with a > 0 satisfies equation (6); i.e.,

    √Var(X) σ1 a = √Var(X) (σ2 + · · · + σd).    (15)

Then the expression

    Var( Y1 + · · · + Yd ) = Var( (1 − g(σ*))Y1 + [ g(σ*)Y1 + Y2 + · · · + Yd ] )
                           = Var( (1 − g(σ*))Y1 ) + Var( g(σ*)Y1 + Y2 + · · · + Yd )
                             + 2 cov( (1 − g(σ*))Y1, g(σ*)Y1 + Y2 + · · · + Yd ),

together with (14), (15) and Lemma 1, concludes that

    Var( Y1 + · · · + Yd ) ≥ Var( (1 − g(σ*))Y1 ),

where the equality holds if and only if (Y1, Y2, · · · , Yd) ∈d Fd(F1, F2, · · · , Fd) is d-CTM satisfying

    Var( a Y1 + Y2 + · · · + Yd ) = 0.

Furthermore, the existence of (a Y1, Y2, · · · , Yd) ∈d Fd(F1*, F2, · · · , Fd), where F1*(x) := F1(x/a) for x ∈ R, is guaranteed by Corollary 3.6 in Wang and Wang (2016).

Remark 2. Note that when σ1 > σ2 + · · · + σd, the minimum variance is obtained if and only if the random vector is d-CTM, rather than jointly mixable. While the results from Wang and Wang (2016) help us establish the existence of (a Y1, Y2, · · · , Yd) having a constant sum, it is d-CTM that characterizes the minimality of the variance of the sum.

4.2. Elliptical Marginal Distributions

A multivariate distribution is said to be d-elliptical, written Ed(µ, Σ, φ), if its characteristic function is of the form

    e^{i t'µ} φ(t'Σt)

for µ ∈ R^d, with a positive-definite d × d matrix Σ, a characteristic generator φ, and i the imaginary unit. An equivalent condition for φ to be a characteristic generator of a d-elliptical distribution, and further properties of elliptical distributions, can be found in Cambanis et al. (1981) and Landsman and Valdez (2003), among many references.

Theorem 3. Suppose Xi =d E1(µi, σi², φ), i = 1, · · · , d, with µi ∈ R, σi > 0 and |φ'(0)| < ∞. For convenience, assume that σ1 = max{σ1, · · · , σd}, and let σ* = σ/σ1 with σ := (σ1, · · · , σd). Then

    Var( X1 + · · · + Xd ) ≥ (1 − g(σ*))² Var(X1),

where the equality holds if and only if

    Var( g(σ*) X1 + X2 + · · · + Xd ) = 0.    (16)

Further, there exists a d-CTM random vector X satisfying (16) in Fd(F1, · · · , Fd).

Proof. Using Theorem 3.6 of Wang and Wang (2016), Lemma 1 and the following relation in Landsman and Valdez (2003),

    Var(Xi) = −2 φ'(0) σi²,

we can finish the proof with the same logic as in the proof of Theorem 2.
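As a worked numeric instance of Theorems 2 and 3 (our illustration; normal marginals belong to both the unimodal-symmetric location-scale family and the elliptical class), take σ = (5, 1, 1). Then σ1 > σ2 + σ3, so g(σ*) = 2/5, and the vector (Y1, −Y1/5, −Y1/5) makes g(σ*)Y1 + Y2 + Y3 vanish, attaining the lower bound (1 − g(σ*))² Var(Y1) = 9:

```python
import numpy as np

def g(x):
    # the function g from Section 4.1
    m, s = max(x), sum(x)
    return m if 2 * m <= s else s - m

sigma = np.array([5.0, 1.0, 1.0])           # sigma_1 > sigma_2 + sigma_3
sigma_star = sigma / sigma[0]
assert abs(g(sigma_star) - 0.4) < 1e-12     # (sigma_2 + sigma_3) / sigma_1 = 2/5
bound = (1 - g(sigma_star)) ** 2 * sigma[0] ** 2   # (1 - g)^2 Var(Y1) = 9

rng = np.random.default_rng(3)
y1 = 5.0 * rng.standard_normal(500_000)     # Y1 ~ N(0, 25)
Y = np.column_stack([y1, -y1 / 5, -y1 / 5]) # Y2, Y3 ~ N(0, 1); g*Y1 + Y2 + Y3 = 0
assert abs(np.var(Y.sum(axis=1)) - bound) < 0.1   # variance of the sum attains the bound
print("minimum variance attained:", bound)
```

Note that these three normal marginals are not jointly mixable (the largest standard deviation exceeds the sum of the others), so the minimum variance is strictly positive and is achieved by a d-CTM vector, in line with Remark 2.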

5. Minimal Copulas and d-CTM Copulas

In this section, we study the relationship between d-CTM copulas and minimal copulas. While Theorem 1 shows that d-CTM copulas constitute a minimal class of copulas, we note that copulas satisfying d-CTM with parameter functions (g1, · · · , gd) are not unique, as Example 1 below shows. Hence, Theorem 1 alone cannot establish that a d-CTM copula is a minimal copula.

Example 1. Let V and V* be independent uniform[0, 1] random variables. Further define strict 4-CTM random vectors as

    U = (V, V*, 1 − V, 1 − V*)    and    U* = (V, 1 − V, V*, 1 − V*).

While U and U* are both 4-CTM with parameter functions (g1, · · · , g4) defined as in (3), it is obvious that U and U* are not equal in distribution.

Of independent interest, we also note that the parameter functions for the d-CTM vector U are not unique. For example, U is also d-CTM with parameter functions f1, · · · , fd defined as

    f1(v) = · · · = f4(v) = (v − 1/2)³ + 1/4    for v ∈ [0, 1].

Example 1 shows that d-CTM copulas may not be unique for given parameter functions. However, this does not imply that a d-CTM copula is not a minimal copula. Hence, the following question is still unanswered: if C is a d-CTM copula, is C a minimal copula?
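A numerical companion to Example 1 (our check, not part of the paper): both U and U* are strict 4-CTM, yet their joint distributions differ, and the cubic parameter functions above also sum to 1 on the support of U.

```python
import numpy as np

rng = np.random.default_rng(4)
v, w = rng.uniform(size=(2, 200_000))
U  = np.column_stack([v, w, 1 - v, 1 - w])   # Example 1: U
Us = np.column_stack([v, 1 - v, w, 1 - w])   # Example 1: U*

# both satisfy sum (2/4) U_i = 1, hence both are strict 4-CTM
assert np.allclose(0.5 * U.sum(axis=1), 1.0)
assert np.allclose(0.5 * Us.sum(axis=1), 1.0)

# but the distributions differ: U1 + U3 = 1 for U, while U1*, U3* are independent
p_U  = np.mean((U[:, 0] <= 0.4) & (U[:, 2] <= 0.4))    # = 0 (impossible event)
p_Us = np.mean((Us[:, 0] <= 0.4) & (Us[:, 2] <= 0.4))  # approx 0.4 * 0.4 = 0.16
assert p_U == 0.0 and abs(p_Us - 0.16) < 5e-3

# the cubic parameter functions f(t) = (t - 1/2)^3 + 1/4 also sum to 1 on U's support
f = lambda t: (t - 0.5) ** 3 + 0.25
assert np.allclose(f(U).sum(axis=1), 1.0)
print("U and U* are both strict 4-CTM but differently distributed")
```

The cubic works because f(t) + f(1 − t) = 1/2 for every t, and U pairs each coordinate with its reflection.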

Theorem 4 provides the answer to the above question and shows that d-CTM copulas are minimal copulas. Before stating the theorem, we provide relevant definitions and lemmas.

Notation 2. Define the support of the parameter functions (f1, · · · , fd) as

    A(f1, · · · , fd) := { u ∈ [0, 1]^d : f1(u1) + · · · + fd(ud) = 1 },

where the parameter functions are defined in Definition 3 and Notation 1. Define the boxes generated by a := (a1, · · · , ad) ∈ R^d and b := (b1, · · · , bd) ∈ R^d as

    B[a, b] := [a1, b1] × · · · × [ad, bd]    and    B(a, b) := (a1, b1) × · · · × (ad, bd).

Lemma 2 and Lemma 3 address the key properties that make a d-CTM copula a minimal copula.

Lemma 2. Let f1, · · · , fd be parameter functions for some d-CTM copula as defined in Definition 3 and Notation 1, and let A := A(f1, · · · , fd). Let O ⊆ [0, 1]^d be any open set such that O ∩ A ≠ ∅. Then, for any x ∈ O ∩ A, there exist y, y* ∈ [0, 1]^d satisfying

    x ∈ B(0, y)  and  B[0, y] ∩ A ⊆ O ∩ A,

and

    x ∈ B(y*, 1)  and  B[y*, 1] ∩ A ⊆ O ∩ A.

Proof. First, since the fi's are strictly increasing functions on [0, 1] by definition, the inverses fi^{-1} are well defined. We only give the proof for the existence of y; the proof for y* is similar. Since O is an open set, there exists ε > 0 such that

    B(x − ε, x + ε) ⊆ O,

where ε := (ε, · · · , ε) ∈ R^d. Define δ := (δ, · · · , δ) ∈ R^d. Now, for the given ε, we explain how to choose a proper δ satisfying 0 < δ < ε and B[0, x + δ] ∩ A ⊆ B(x − ε, x + ε). For given δ ∈ (0, ε) and any z ∈ B[0, x + δ] ∩ A, the first component z1 of z can be expressed as

    z1 = f1^{-1}( 1 − f2(z2) − · · · − fd(zd) )
       ≥ f1^{-1}( 1 − f2(x2 + δ) − · · · − fd(xd + δ) ),    (17)

where the inequality holds because f1^{-1}(1 − f2(x2 + δ) − · · · − fd(xd + δ)) is a decreasing function of δ for 0 ≤ δ < ε. Since f1^{-1}(1 − f2(x2 + δ) − · · · − fd(xd + δ)) is also continuous in δ for 0 ≤ δ < ε, we have

    lim_{δ ↓ 0} f1^{-1}( 1 − f2(x2 + δ) − · · · − fd(xd + δ) ) = x1,

which, along with (17), implies the existence of δ = δ^{(1)} > 0 such that z1 > x1 − ε for any z ∈ B[0, x + δ] ∩ A. Similarly, there exists δ = δ^{(i)} > 0 such that zi > xi − ε. Hence, we can conclude

    B[0, x + δ_min] ∩ A ⊆ B(x − ε, x + ε),    (18)

where δm := min{δ^{(1)}, · · · , δ^{(d)}} and δ_min := (δm, · · · , δm) ∈ R^d_+. If we define y = x + δ_min, since y > x, we surely have x ∈ B(0, y), which concludes the proof with (18).

Lemma 3. Let f1, · · · , fd be parameter functions for some d-CTM copula as defined in Definition 3 and Notation 1, and let A := A(f1, · · · , fd). Further, let O ⊆ [0, 1]^d be any open set such that O ∩ A ≠ ∅. Then there exists an at most countable set N ⊆ [0, 1]^d such that

    O ∩ A = ∪_{y ∈ N} ( B[0, y] ∩ A ),    (19)

where the sets B[0, y] ∩ A for different y ∈ N are disjoint. Similarly, there exists an at most countable set N* ⊆ [0, 1]^d such that

    O ∩ A = ∪_{y ∈ N*} ( B[y, 1] ∩ A ),    (20)

where the sets B[y, 1] ∩ A for different y ∈ N* are disjoint.

Proof. Since the proofs of (19) and (20) are almost the same, we only prove (19). Define

    Q := { x ∈ A : x1, · · · , x_{d−1} are rational numbers }.

Note that Q is at most countable. Further, it is easy to show that Q is dense in A, because

i. the points whose elements are rational numbers are dense in I^{d−1}, and
ii. x ∈ A can be expressed as xd = fd^{-1}( 1 − f1(x1) − · · · − f_{d−1}(x_{d−1}) ).

Since Q is countable, there exists a sequence q^{(i)} ∈ Q such that

    Q = { q^{(i)} ∈ A : i ∈ N+ }.

Let x^{(1)} ∈ O^{(1)} ∩ Q, where O^{(1)} := O. Since Q ⊆ A, and hence x^{(1)} ∈ O^{(1)} ∩ Q ⊆ O^{(1)} ∩ A, Lemma 2 implies the existence of y^{(1)} ∈ [0, 1]^d such that

    x^{(1)} ∈ B(0, y^{(1)})  and  B[0, y^{(1)}] ∩ A ⊆ O^{(1)} ∩ A.

Further, for given k ∈ N+, define

    O^{(k+1)} := O^{(k)} \ B[0, y^{(k)}]    and    x^{(k+1)} := q^{(n)},

where n ∈ N+ is the smallest i ∈ N+ such that q^{(i)} ∈ O^{(k+1)} ∩ Q. Since O^{(k+1)} is an open set and x^{(k+1)} ∈ O^{(k+1)} ∩ Q ⊆ O^{(k+1)} ∩ A, there exists y^{(k+1)} ∈ [0, 1]^d such that

    x^{(k+1)} ∈ B(0, y^{(k+1)})  and  B[0, y^{(k+1)}] ∩ A ⊆ O^{(k+1)} ∩ A.

Repeating the above procedure countably many times guarantees that

    O ∩ Q ⊆ ∪_{i ∈ N} ( B[0, y^{(i)}] ∩ A ),

where N is an at most countable subset of N+. Furthermore, the sets B[0, y^{(i)}] ∩ A for i ∈ N are disjoint by construction. Since Q is dense in A, for any z ∈ O ∩ A there exists z' ∈ O ∩ Q such that z, z' ∈ B(0, y^{(i)}) ⊆ B[0, y^{(i)}] for some i ∈ N. Hence, we conclude that

    O ∩ A ⊆ ∪_{i ∈ N} ( B[0, y^{(i)}] ∩ A ),

which finishes the proof; the other inclusion is trivial by the definition of B[0, y^{(i)}].

Now Lemmas 2 and 3 can be used to conclude that d-CTM copulas are minimal copulas, as shown in the following theorem.

Theorem 4. Suppose that U and U* are random vectors having copulas C, C* ∈ Fd as distribution functions, respectively. Let U be d-CTM with parameter functions (f1, · · · , fd). If U* ≺ U, then U* =d U.

Proof. If U* ≠d U, a simple argument shows the existence of a, b ∈ [0, 1]^d satisfying

    P(U ∈ B(a, b)) < P(U* ∈ B(a, b)).    (21)

On the other hand, by Lemma 3, there exists a countable subset N ⊆ [0, 1]^d such that B(a, b) ∩ A = ∪_{y ∈ N} ( B[0, y] ∩ A ), where the sets B[0, y] ∩ A for y ∈ N are disjoint. Hence, we have

    P(U ∈ B(a, b)) = P(U ∈ B(a, b) ∩ A)
                   = P(U ∈ ∪_{y ∈ N} ( B[0, y] ∩ A ))
                   = Σ_{y ∈ N} P(U ∈ B[0, y] ∩ A)    (22)
                   = Σ_{y ∈ N} P(U ∈ B[0, y]),

where the third equality holds because the sets B[0, y] ∩ A for y ∈ N are disjoint and N is countable. Similarly, we also have

    P(U* ∈ B(a, b)) = Σ_{y ∈ N} P(U* ∈ B[0, y]).    (23)

Since

    P(U* ∈ B[0, y]) ≤ P(U ∈ B[0, y]),

combining with (22) and (23), we have

    P(U* ∈ B(a, b)) ≤ P(U ∈ B(a, b)),

which contradicts (21). Hence, we conclude that U* =d U.

6. Partially d-CTM and Minimal Copula

So far, we have shown that a d-CTM copula is a minimal copula. In other words, we have found a sufficient condition for minimal copulas. A natural question is whether d-CTM is also a necessary condition for minimal copulas. Example 2 below exhibits a copula C that is minimal but not d-CTM. Before we proceed to the example, we need the following lemma, whose proof can be found in the Appendix.


Lemma 4. Let U = (U1 , U2 , U3 ) be a random vector having the copula C ∈ F3 , as the distribution function. Furthermore, assume that (U1 , U2 ) is countermonotonic. Then, for any copula C ∗ ∈ F3 satisfying C ∗ ≺ C, we have

d

C ∗ = C. The following example, with the help of Lemma 4, constructs a copula C that is a minimal copula but not d-CTM. Example 2. Let V and V ∗ be independent uniform(0, 1) random variables, and C to be a copula of the vector U := (V, 1 − V, V ∗ ). Now we show that C is not d-CTM. Since the first two components in U are countermonotonic, U has a minimal copula. However, this cannot be d-CTM as will be explained as follows. Suppose that U is d-CTM. Then, by definition, there exist some strictly increasing functions satisfying f1 (V ) + f2 (1 − V ) + f3 (V ∗ ) = 1. Then, V ∗ can be expressed uniquely in terms of V . This is contradictory to the independence assumption of V and V ∗ . The following lemma generalizes Lemma 4 by replacing the countermonotonicity with d-CTM. Lemma 5. If U is d-CTM, then (U , V ) has a minimal copula for any uniform[0, 1] distributed random variable V . Proof. Assume that a (d + 1)-variate random vector (U ∗ , V ∗ ) having uniform [0, 1] marginal distributions satisfies (U ∗ , V ∗ ) ≺ (U , V ). (24) First, observe that (24) and Theorem 4 imply

d

U∗ = U. If we assume

(25)

d

(U ∗ , V ∗ ) 6= (U , V )

then a simple argument shows the existence of

a := (a1, · · · , ad+1) and b := (b1, · · · , bd+1) ∈ I^{d+1}

satisfying

P(U∗ ∈ B(a, b), V∗ ∈ [ad+1, bd+1]) > P(U ∈ B(a, b), V ∈ [ad+1, bd+1]).    (26)

Now, we have

P(U∗ ∈ B(a, b), V∗ ∈ [ad+1, bd+1])
= P(U∗ ∈ B(a, b), V∗ ∈ [ad+1, 1]) + P(U∗ ∈ B(a, b), V∗ ∈ [0, bd+1]) − P(U∗ ∈ B(a, b))
= P(U∗ ∈ B(a, b), V∗ ∈ [ad+1, 1]) + P(U∗ ∈ B(a, b), V∗ ∈ [0, bd+1]) − P(U ∈ B(a, b))
≤ P(U ∈ B(a, b), V ∈ [ad+1, 1]) + P(U ∈ B(a, b), V ∈ [0, bd+1]) − P(U ∈ B(a, b))
= P(U ∈ B(a, b), V ∈ [ad+1, bd+1]),

where the second equality holds by (25) and the inequality follows from Lemma 3. Now (26) and (21) give a contradiction, which concludes

(U∗, V∗) =ᵈ (U, V).
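The inclusion–exclusion step in the proof above rests on the set identity 1{A, V ∈ [a, b]} = 1{A, V ∈ [a, 1]} + 1{A, V ∈ [0, b]} − 1{A}, valid whenever a ≤ b because [a, 1] ∪ [0, b] = [0, 1]. A minimal simulation sketch (the event A, the bounds, and the sample size are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
v = rng.random(n)          # plays the role of V, uniform on [0, 1]
w = rng.random(n)          # auxiliary variable used only to build an event A
a, b = 0.3, 0.8            # any 0 <= a <= b <= 1

A = w < 0.6                # illustrative event standing in for {U in B(a, b)}

# counts of samples in each event; the identity holds exactly, sample by sample
lhs = np.sum(A & (v >= a) & (v <= b))
rhs = np.sum(A & (v >= a)) + np.sum(A & (v <= b)) - np.sum(A)
print(lhs == rhs)  # True, since [a, 1] and [0, b] together cover [0, 1]
```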

Lemma 5 shows that the copula of every (d + 1)-variate random vector that is partially d-CTM is a minimal copula. A natural question would be: is the copula of every (d + m)-variate random vector satisfying partial d-CTM a minimal copula for m = 2, 3, · · ·? Unfortunately, the answer is negative, as shown in the next example.

Example 3. Let V, V∗, and V∗∗ be independent uniform[0, 1] random variables, and define a random vector U := (V, 1 − V, V∗, V∗). Since (U1, U2) = (V, 1 − V) is countermonotonic, U is partially 2-CTM. Furthermore, if we define U∗ := (V, 1 − V, V∗, V∗∗), then the inequalities

P(U∗ ≤ u) = P((V, 1 − V) ≤ (u1, u2)) P((V∗, V∗∗) ≤ (u3, u4))
≤ P((V, 1 − V) ≤ (u1, u2)) P((V∗, V∗) ≤ (u3, u4)) = P(U ≤ u)

and

P(U∗ ≥ u) = P((V, 1 − V) ≥ (u1, u2)) P((V∗, V∗∗) ≥ (u3, u4))
≤ P((V, 1 − V) ≥ (u1, u2)) P((V∗, V∗) ≥ (u3, u4)) = P(U ≥ u)

together imply

U∗ ≺ U.

Since U∗ ≠ᵈ U, we conclude that the copula of U is not a minimal copula.
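The two inequalities in Example 3 reduce to elementary facts about the last two coordinates: by independence P((V∗, V∗∗) ≤ (u3, u4)) = u3·u4, while the comonotonic pair gives P((V∗, V∗) ≤ (u3, u4)) = min(u3, u4), and u3·u4 ≤ min(u3, u4) on the unit square; the survival versions are analogous. A small grid check (a sketch; the grid resolution is an arbitrary choice):

```python
import numpy as np

# grid over the unit square for (u3, u4)
g = np.linspace(0.0, 1.0, 101)
u3, u4 = np.meshgrid(g, g)

# lower-orthant probabilities for the last two coordinates of U* and U
lower_star = u3 * u4              # P((V*, V**) <= (u3, u4)), independent pair
lower_u = np.minimum(u3, u4)      # P((V*, V*) <= (u3, u4)), comonotone pair

# upper-orthant (survival) probabilities
upper_star = (1 - u3) * (1 - u4)      # P((V*, V**) >= (u3, u4))
upper_u = np.minimum(1 - u3, 1 - u4)  # P((V*, V*) >= (u3, u4))

print(np.all(lower_star <= lower_u))   # True
print(np.all(upper_star <= upper_u))   # True
```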

7. Conclusions

In this paper, we illustrated that d-CTM copulas are useful in variance minimization problems, which depend heavily on the marginal distributions. Two non-trivial examples were given to motivate generalizing joint mixability to d-CTM. The main result is that d-CTM copulas are minimal copulas in the concordance ordering, which generalizes Lee and Ahn (2014) and Ahn (2015). For future work, it will be interesting to study the role of d-CTM copulas in general concordance optimization problems.

Acknowledgements

For Woojoo Lee, this work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2013R1A1A1061332). For Ka Chun Cheung, this work was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU701213), and the CAE 2013 research grant from the Society of Actuaries. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the SOA. For Jae Youn Ahn, this work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (NRF-2013R1A1A1076062).

References

Ahn, J. Y. (2015). Negative dependence concept in copulas and the marginal free herd behavior index. Journal of Computational and Applied Mathematics, 288:304–322.

Cambanis, S., Huang, S., and Simons, G. (1981). On the theory of elliptically contoured distributions. Journal of Multivariate Analysis, 11(3):368–385.

Cherubini, U., Luciano, E., and Vecchiato, W. (2004). Copula Methods in Finance. Wiley Finance Series. John Wiley & Sons Ltd., Chichester.

Cheung, K. C. (2008). Characterization of comonotonicity using convex order. Insurance: Mathematics and Economics, 43(3):403–406.

Cheung, K. C., Dhaene, J., Lo, A., and Tang, Q. (2014). Reducing risk by merging counter-monotonic risks. Insurance: Mathematics and Economics, 54:58–65.

Cheung, K. C. and Lo, A. (2013). Characterizations of counter-monotonicity and upper comonotonicity by (tail) convex order. Insurance: Mathematics and Economics, 53(2):334–342.

Cheung, K. C. and Lo, A. (2014). Characterizing mutual exclusivity as the strongest negative multivariate dependence structure. Insurance: Mathematics and Economics, 55:180–190.

Dhaene, J. and Denuit, M. (1999). The safest dependence structure among risks. Insurance: Mathematics and Economics, 25(1):11–21.

Dhaene, J., Denuit, M., Goovaerts, M. J., Kaas, R., and Vyncke, D. (2002). The concept of comonotonicity in actuarial science and finance: Theory. Insurance: Mathematics and Economics, 31(1):3–33.

Dhaene, J., Vanduffel, S., Goovaerts, M. J., Kaas, R., Tang, Q., and Vyncke, D. (2006). Risk measures and comonotonicity: a review. Stochastic Models, 22(4):573–606.

Joe, H. (1990). Multivariate concordance. Journal of Multivariate Analysis, 35(1):12–30.

Joe, H. (2014). Dependence Modeling with Copulas. CRC Press.

Knott, M. and Smith, C. (2006). Choosing joint distributions so that the variance of the sum is small. Journal of Multivariate Analysis, 97(8):1757–1765.

Kotz, S. and Seeger, J. P. (1992). Lower bounds on multivariate distributions with preassigned marginals. In Stochastic Inequalities (Seattle, WA, 1991), volume 22 of IMS Lecture Notes Monogr. Ser., pages 211–218. Inst. Math. Statist., Hayward, CA.

Landsman, Z. M. and Valdez, E. A. (2003). Tail conditional expectations for elliptical distributions. North American Actuarial Journal, 7(4):55–71.

Lee, W. and Ahn, J. Y. (2014). On the multidimensional extension of countermonotonicity and its applications. Insurance: Mathematics and Economics, 56:68–79.

Nelsen, R. B. (2006). An Introduction to Copulas. Springer Series in Statistics. Springer, New York, second edition.

Puccetti, G. and Wang, R. (2015). Extremal dependence concepts. Statistical Science, 30(4):485–517.

Rüschendorf, L. and Uckelmann, L. (2002). Variance minimization and random variables with constant sum. In Distributions with Given Marginals and Statistical Modelling, pages 211–222. Kluwer Academic Publishers, Dordrecht.

Sklar, M. (1959). Fonctions de répartition à n dimensions et leurs marges. Institute of Statistics of the University of Paris, 8:229–231.

Tankov, P. (2011). Improved Fréchet bounds and model-free pricing of multi-asset options. Journal of Applied Probability, 48(2):389–403.

Wang, B. and Wang, R. (2011). The complete mixability and convex minimization problems with monotone marginal densities. Journal of Multivariate Analysis, 102(10):1344–1360.

Wang, B. and Wang, R. (2016). Joint mixability. Mathematics of Operations Research.

Appendix A.

For simplicity of the proof of Lemma 4, we provide the following lemma.

Lemma 6. For any bivariate copulas satisfying

C∗ ≺ C,    (A.1)

we have ν1(C) ≺ ν1(C∗), where the operation ν1 on bivariate copulas is defined as

ν1(C)(u1, u2) := u2 − C(1 − u1, u2)  for u1, u2 ∈ [0, 1] and C ∈ F2.
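The effect of ν1 is easy to see on the Fréchet–Hoeffding bounds M(u1, u2) = min(u1, u2) and W(u1, u2) = max(u1 + u2 − 1, 0): a direct computation gives ν1(M) = W and ν1(W) = M, so ν1 reverses the ordering W ≺ M exactly as Lemma 6 predicts. A grid-based numerical sketch (the grid is an illustrative choice, not part of the paper's argument):

```python
import numpy as np

def M(u1, u2):            # Frechet-Hoeffding upper bound
    return np.minimum(u1, u2)

def W(u1, u2):            # Frechet-Hoeffding lower bound (a bivariate copula)
    return np.maximum(u1 + u2 - 1.0, 0.0)

def nu1(C, u1, u2):       # nu_1(C)(u1, u2) := u2 - C(1 - u1, u2)
    return u2 - C(1.0 - u1, u2)

g = np.linspace(0.0, 1.0, 201)
u1, u2 = np.meshgrid(g, g)

print(np.allclose(nu1(M, u1, u2), W(u1, u2)))  # True: nu1(M) = W
print(np.allclose(nu1(W, u1, u2), M(u1, u2)))  # True: nu1(W) = M
```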

Proof. From (A.1), we have

C∗(1 − u1, u2) ≤ C(1 − u1, u2)  for u1, u2 ∈ [0, 1],

which implies

u2 − C∗(1 − u1, u2) ≥ u2 − C(1 − u1, u2).

Hence, we conclude

ν1(C∗)(u1, u2) ≥ ν1(C)(u1, u2)  for u1, u2 ∈ [0, 1].    (A.2)

Since C and C∗ are bivariate copulas, so are ν1(C) and ν1(C∗), and (A.2) is exactly the concordance ordering ν1(C) ≺ ν1(C∗), which concludes the proof.

The following is the proof of Lemma 4. Note that, since d-CTM coincides with countermonotonicity for d = 2, Lemma 4 can also be proved using the same technique as in the proof of Lemma 5.

Proof of Lemma 4. For notational brevity, for i = 1, 2, 3, define the operator proj(i) : F3 → F2 as

proj(i)(C) = C(u1, u2, u3) with ui = 1.

First, the inequality

C∗ ≺ C    (A.3)

implies

proj(3)(C∗) ≺ proj(3)(C),

which, together with Theorem 4, implies

proj(3)(C∗)(u1, u2) = proj(3)(C)(u1, u2)  for all (u1, u2) ∈ [0, 1]².

Since proj(3)(C∗) and proj(3)(C) are bivariate countermonotonic copulas, showing C∗ = C is equivalent to showing proj(2)(C∗) = proj(2)(C) or proj(1)(C∗) = proj(1)(C). Now, from the property of the bivariate countermonotonic copula, observe that

ν1(proj(2)(C∗)) = proj(1)(C∗),    (A.4)

and the same identity holds for C. Further, Assumption (A.3) implies

proj(2)(C∗) ≺ proj(2)(C) and proj(1)(C∗) ≺ proj(1)(C).    (A.5)

By Lemma 6, (A.5) implies

ν1(proj(2)(C)) ≺ ν1(proj(2)(C∗)) and ν1(proj(1)(C)) ≺ ν1(proj(1)(C∗)),    (A.6)

and, by (A.4), the first ordering in (A.6) becomes proj(1)(C) ≺ proj(1)(C∗). Combined with the second ordering in (A.5), this gives proj(1)(C∗) = proj(1)(C), which concludes that C∗ = C.
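Identity (A.4) can be checked concretely on the copula of Example 2, U = (V, 1 − V, V∗), for which C(u1, u2, u3) = max(u1 + u2 − 1, 0)·u3; here proj(2)(C)(u1, u3) = u1·u3 and proj(1)(C)(u2, u3) = u2·u3, and the identity (checked for C itself rather than C∗, as noted after (A.4)) reads ν1(proj(2)(C)) = proj(1)(C). A numerical sketch under these assumptions:

```python
import numpy as np

def C(u1, u2, u3):        # copula of (V, 1 - V, V*) from Example 2
    return np.maximum(u1 + u2 - 1.0, 0.0) * u3

def proj(i, C3, x, y):    # proj^(i): set the i-th argument of C3 to 1
    args = [x, y]
    args.insert(i - 1, 1.0)
    return C3(*args)

def nu1(C2, x, y):        # nu_1(C2)(x, y) := y - C2(1 - x, y)
    return y - C2(1.0 - x, y)

g = np.linspace(0.0, 1.0, 101)
x, y = np.meshgrid(g, g)

lhs = nu1(lambda a, b: proj(2, C, a, b), x, y)  # nu1(proj^(2)(C))
rhs = proj(1, C, x, y)                          # proj^(1)(C)
print(np.allclose(lhs, rhs))  # True: identity (A.4) holds for this copula
```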
