Sibuya copulas

Sibuya copulas

Journal of Multivariate Analysis 114 (2013) 318–337 Contents lists available at SciVerse ScienceDirect Journal of Multivariate Analysis journal home...

2MB Sizes 23 Downloads 103 Views

Journal of Multivariate Analysis 114 (2013) 318–337

Contents lists available at SciVerse ScienceDirect

Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva

Sibuya copulas Marius Hofert a,∗ , Frédéric Vrins b a

RiskLab, Department of Mathematics, ETH Zurich, 8092 Zurich, Switzerland

b

ING Belgium SA, Brussels, Belgium

article

info

Article history: Received 24 May 2011 Available online 16 August 2012 AMS 2010 subject classifications: 60E05 62H99 60G99 62H05 62H20 Keywords: Sibuya form Stochastic processes Poisson process Archimedean copulas

abstract A new class of copulas referred to as ‘‘Sibuya copulas’’ is introduced and its properties are investigated. Members of this class are of a functional form which was first investigated in the work of M. Sibuya. The construction of Sibuya copulas is based on an increasing stochastic process whose Laplace–Stieltjes transform enters the copula as a parameter function. Sibuya copulas also allow for idiosyncratic parameter functions and are thus quite flexible to model asymmetric dependences. If the stochastic process is allowed to have jumps, Sibuya copulas may have a singular component. Depending on the choice of the process, they may be extreme-value copulas, Lévy-frailty copulas, or Marshall–Olkin copulas. Further, as a special symmetric case, one may obtain any Archimedean copula with Laplace–Stieltjes transform as generator. Besides some general properties of Sibuya copulas, several examples are given and their properties are investigated in more detail. The construction scheme associated to Sibuya copulas provides a sampling algorithm. Further, it can be generalized, for example, to allow for hierarchical structures, or for an additional source of dependence via another copula. © 2012 Elsevier Inc. All rights reserved.

1. Introduction The main goal of the present work is to construct and investigate a new, flexible, and asymmetric class of d-dimensional copulas called Sibuya copulas. Sibuya copulas arise from a specific construction scheme and naturally take on the Sibuya form C (u) = Ω (u)Π (u),

(1)

where Π (u) = j=1 uj denotes the independence copula and Ω (u) is the dependence function. As explained below, this naming convention refers to the work of M. Sibuya, who investigated bivariate distributions written as the product of the two marginals and a dependence function; see [25]. Of course, any copula can be written in Sibuya form. By contrast, the class of copulas we shall generate and analyze shares a particular parametric form for the function Ω (u). Sibuya copulas, which naturally appear in Sibuya form, are not just artificial mathematical objects. In effect, specific members of this class arise from a particular dynamic default model that has been recently proposed to value particular financial products; see [26]. Therefore, our starting point is to present a generic default model. We then derive the associated dependence structure and show that it naturally shares the Sibuya form (1): the joint survival probability has a separable structure. Indeed, it readily takes the form of the product of a jointure function (which embeds all the information about the coupling) and the product of the marginal default probabilities (which in contrast do not depend on the coupling of the entities). We then work out its main statistical properties, present detailed examples, and conclude by giving some routes for possible generalizations.

d



Corresponding author. E-mail addresses: [email protected], [email protected] (M. Hofert), [email protected] (F. Vrins).

0047-259X/$ – see front matter © 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.jmva.2012.08.007

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

319

Sibuya copulas are remarkable in many ways. In what follows, focus is put on the following important aspects. (1) A distinguishing feature compared to other copula classes is the fact that Sibuya copulas allow for functional parameters. In contrast to Archimedean copulas, for example (where the generator can be viewed as a single functional parameter), Sibuya copulas can feature up to d parameter functions. This important property leads to a broad class of flexible coupling schemes. (2) Sibuya copulas embed many known (but rather different) copulas as special cases. For example, Sibuya copulas can be Archimedean copulas, Khoudraji-transformed Archimedean copulas, extreme-value copulas, Lévy-frailty copulas, or Marshall–Olkin copulas. Additionally, new copula subclasses arise, such as a generalization of Archimedean copulas. This is due to the fact that Sibuya copulas are obtained from a quite general construction scheme. (3) As another important specificity, Sibuya copulas are not restricted to functional symmetry, that is, exchangeability. This is important in larger dimensions when exchangeability becomes a more and more restrictive assumption (which has recently led to the investigation of copula classes such as Liouville copulas, nested Archimedean copulas, or pair-copula constructions). Similarly, they may have a singular component (or not) depending on the specific construction scheme chosen. The article is organized as follows. Section 2 covers the construction of Sibuya copulas. Because of their general functional form, it seems difficult to determine specific properties of Sibuya copulas in general. We therefore present several newly constructed examples in Section 4 and investigate their properties. In particular, (non-)homogeneous Poisson Sibuya copulas and Archimedean Sibuya copulas are introduced. Section 5 briefly addresses possible extensions of the construction principle for Sibuya copulas, including a hierarchical extension via multiple stochastic processes. Finally, Section 6 concludes. For the reader’s convenience, proofs are deferred to the Appendix. 2. Construction In this section, we derive the newly proposed dependence structure (1) from Sklar’s Theorem based on a joint model. For simplicity, the model is not presented in full generality. Generalizations are given in Section 5. To make our construction of the dependence structure more intuitive (and only for this reason), we derive the copula class in terms of the notation and context of a generalization of the standard intensity-based default model. It is important to note that this is only for the purpose of deriving the copula. As for any copula model, a realistic joint model (to model defaults, for example) can then be constructed with Sklar’s Theorem by plugging in suitable margins. Before proceeding with the mathematical derivation of the dependence structure, it is probably interesting to note that our construction method is based on first passage times of stochastic processes below random barriers. Therefore, some connections can be made with well-known copulas based on the same underlying idea. Marshall–Olkin copulas, first discussed in the bivariate case in [4] and extended to the general multivariate case in [16], are one of the most prominent examples of such kinds of coupling scheme. 2.1. The basic construction principle In order to construct the new class of copulas, we consider d components in a system which are affected by defaults or deaths. 
We assume the jth component to have a stochastic survival process pj , defined by pj (t ) = exp(−Λj,t ),

t ∈ [0, ∞),

(2)

where (Λj,t )t ∈[0,∞) , j ∈ {1, . . . , d}, are right-continuous, increasing stochastic processes with Λj,0 = 0, j ∈ {1, . . . , d}. The corresponding default time of the jth component is then modeled via

τj = inf{t ≥ 0 : pj (t ) ≤ Uj }

(3)

for independent triggers Uj ∼ U [0, 1], j ∈ {1, . . . , d}, independent of (Λj,t )t ∈[0,∞) , j ∈ {1, . . . , d}. As a model for the processes (Λj,t )t ∈[0,∞) , j ∈ {1, . . . , d}, we consider

Λj,t = Mj (t ) + Yt ,

t ∈ [0, ∞), j ∈ {1, . . . , d},

(4)

where Mj , j ∈ {1, . . . , d}, are right-continuous, increasing functions with Mj (0) = 0, j ∈ {1, . . . , d}, and (Yt )t ∈[0,∞) is a right-continuous, increasing stochastic process satisfying Y0 = 0. Remark 2.1. (1) The setup described above is a generalization of the standard intensity-based approach for modeling defaults where default times are modeled as the first-jump times of a (possibly non-homogeneous) Poisson process.  t In the latter, classical setup, Λj,t is the so-called integrated rate function, a deterministic function of the form Λj,t = 0 λj (s) ds, where the intensity λj : [0, ∞) → [0, ∞) is a deterministic, non-negative function; note the difference to the above setup where Λj,t is stochastic. Moreover, (2) is then the survival probability pj (t ) of component j until time t; note that this quantity is stochastic in the new setup above. The construction of the default time τj in the classical setup is done as above and is known as canonical construction; see [2, p. 227] or [24, p. 122].

320

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

(2) The intuition behind the choice in (4) is that the term structure of the survival probability is modeled through an individual deterministic part, via Mj , and through a stochastic part affecting all (or just several; see Section 5.1) components, via (Yt )t ∈[0,∞) . (3) Being the starting point of our construction, this model is specifically interesting to model default events. However, it is a general coupling scheme, and one could think of other applications. For instance, our dependence structure could also be used as a shock model in reliability engineering. The default times τj , j ∈ {1, . . . , d}, are naturally dependent due to the common stochastic process (Yt )t ∈[0,∞) . Our main goal is to investigate the survival copula of τ = (τ1 , . . . , τd )⊤ arising from this construction. We first focus on the case where U ∼ U [0, 1]d , that is, the dependence is solely induced by (Yt )t ∈[0,∞) . An extension to nested or hierarchical dependence structures, as well as dependent triggers Uj , j ∈ {1, . . . , d}, is discussed later; see Sections 5.1 and 5.2. 2.2. The joint survival function The following theorem presents the joint survival function S (t ) = P(τ1 > t1 , . . . , τd > td ),

t = (t1 , . . . , td )⊤ ∈ [0, ∞)d ,

of the random vector of default times τ = (τ1 , . . . , τd )⊤ . Here, and in what follows,

ψZ (x) = E[exp(−xZ )],

x ∈ [0, ∞),

denotes the Laplace–Stieltjes transform of the random variable Z at x. Theorem 2.2 (Joint Survival Function). For j ∈ {1, . . . , d}, let the survival process pj be given by (2) and (4), and let the corresponding default time τj be specified (3); that is, let

τj = inf{t ≥ 0 : exp(−(Mj (t ) + Yt )) ≤ Uj },

j ∈ {1, . . . , d}.

Then, the joint survival function of the vector τ of default times is given by S (t ) = Ω ∗ (t )

d 

Sj (tj ),

where Ω ∗ (t ) =

j =1

ψd

j=1 Ytj

d  j =1

(1)

,

(5)

ψYtj (1)

where Sj (t ) = P(τj > t ), t ∈ [0, ∞), j ∈ {1, . . . , d}, denote the marginal survival functions corresponding to S, given by Sj (t ) = ψMj (t )+Yt (1) = exp(−Mj (t ))ψYt (1),

t ∈ [0, ∞), j ∈ {1, . . . , d}.

(6)

One special subclass of processes (Yt )t ∈[0,∞) consists of those with independent increments. In this case one obtains the following result. Corollary 2.3 (Independent Increments). Assume the setup of Theorem 2.2 and let t0 = 0. If the stochastic process (Yt )t ∈[0,∞) has independent increments, then

Ω ∗ (t ) =

d ψ  Yt(j) −Yt(j−1) (d − j + 1) j =1

ψYt(j) (1)

,

where t(j) denotes the jth order statistic of tk , k ∈ {1, . . . , d}; that is, t1 ≤ · · · ≤ td . If, moreover, the increments are stationary, then

Ω ∗ (t ) =

d ψ  Yt(j) −t(j−1) (d − j + 1) j =1

ψYt(j) (1)

.

2.3. The implied dependence structure With the joint survival function of the vector of default times τ and the corresponding marginal survival functions at hand, one can derive the copula which provides a link between these two pieces of the multivariate default model. For the reader’s convenience, we introduce the abbreviations − u− j = Sj (uj ),

− u− (j) = S· (u· )(j) ,

− for j ∈ {1, . . . , d}, where u− (j) denotes the jth order statistic of Sj (uj ) (the generalized inverse of Sj (t ) = ψMj (t )+Yt (1) = exp(−Mj (t ))ψYt (1)), j ∈ {1, . . . , d}; that is, S·− (u· )(1) ≤ · · · ≤ S·− (u· )(d) .

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

321

Corollary 2.4 (Sibuya Copulas). Assume the setup of Theorem 2.2. Then, the copula C corresponding to the joint survival function (5) is given by − C (u) = S (u− 1 , . . . , ud ) = Ω (u)Π (u),

(7)

with

ψd Ω (u) = Ω (u1 , . . . , ud ) = ∗



j=1 Yu− j



d 

(1) .

(8)

ψY − (1)

j =1

u

j

If (Yt )t ∈[0,∞) has independent increments, then

Ω (u) =

(d − j + 1) d ψY − −Y −  u u (j) (j−1) j =1

ψY

u

− (j)

(1)

,

and if the increments are stationary, then

Ω (u) =

(d − j + 1) d ψY − −  u −u (j) (j−1) j =1

ψY

u

− (j)

(1)

.

Definition 2.5. A copula C is in Sibuya form if C is written in the form C (u) = Ω (u)Π (u), u ∈ [0, 1]d . The function Ω is referred to as a dependence function. A Sibuya copula is a copula in Sibuya form with Ω being as in (8). Remark 2.6. (1) Let us briefly explain why we use the above notation and naming. Copulas in Sibuya form in general and Sibuya copulas in particular are similar to a class of distributions introduced by Sibuya [25]. In his work, Sibuya considered a bivariate distribution function F (x, y) with marginals G(x) and H (y). He introduced (by definition) a function Ω ∗ via F (x, y) = Ω ∗ (x, y)G(x)H (y), and defined a dependence function Ω via F (x, y) = Ω (G(x), H (y))G(x)H (y). Due to the similarity with (5) and (7), we call copulas of Type (7) Sibuya copulas. As differences, we note th the following. • We consider survival functions instead of distribution functions. • We look at general d, not only the bivariate case. • We do not just define Ω ; it appears naturally from our construction in Section 2.1. (2) Sibuya copulas naturally write in Sibuya form, for the dependence function Ω being given as in (8). Eyraud–Farlie– Gumbel–Morgenstern copulas (see [13, Formula (1.21) p. 19]) are also naturally in Sibuya form. It is not clear, however, if they are also Sibuya copulas; that is, if their dependence function Ω can be written as (8) in terms of an admissible stochastic process (Yt )t ∈[0,∞) . (3) It is important to note that the process (Yt )t ∈[0,∞) and the individual functions Mj , j ∈ {1, . . . , d}, enter the copula as parameter functions. If these parameter functions are chosen, the copula can, of course, be fed with any univariate (survival) distribution functions to obtain a joint (survival) distribution function via Sklar’s Theorem. Clearly, in the specific case where the survival functions plugged in are precisely those given by (6), the associated joint survival distribution is given by (5). 3. Properties of Sibuya copulas In this section, we present various properties of Sibuya copulas. Due to their quite general functional form, Sibuya copulas can have a variety of different properties. This variety also becomes apparent by noting that every right-continuous increasing stochastic process (Yt )t ∈[0,∞) starting at zero generates a set of Sibuya copulas (with (Yt )t ∈[0,∞) fixed, the individual functions Mj , j ∈ {1, . . . , d}, constitute the remaining ‘‘degrees of freedom’’). We will frequently investigate the symmetric representative of this set; that is, we will frequently assume Mj ≡ M ,

j ∈ {1, . . . , d},

(S)

where M : [0, ∞) → [0, ∞) is right-continuous, increasing, and 0 at zero. We sometimes also consider the case where (Yt )t ∈[0,∞) is a subordinator; that is, (Yt )t ∈[0,∞) is an almost surely increasing Lévy process. In this case, we have

ψYt (x) = E[exp(−xYt )] = exp(−t Ψ (x)),

t ∈ [0, ∞),

(L)

322

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

where Ψ denotes the corresponding Laplace exponent; Ψ is 0 at zero, increasing, and can be shown to satisfy

Ψ (x) ≤ xΨ (1),

x ∈ [0, ∞).

(9)

3.1. Limiting cases In this section, we show that both the product copula C = Π and the upper Fréchet–Hoeffding bound copula C = M are Sibuya copulas. It will then follow from Section 4.3 that Sibuya copulas cover the whole range of dependence between Π and M measured in terms of concordance. Recall that in order to show that Π and M are Sibuya copulas, it is not enough to extract the dependence function given by the Sibuya form; we also need to show that the latter takes on the form as given in (8). It is clear from (7) that C = Π if Ω (u) = 1 for all u ∈ [0, 1]d . This is equivalent to Ω ∗ (t ) = 1 for all t ∈ [0, ∞)d . By (5), this happens if

ψd

j=1 Ytj

(1) =

d 

 ψYtj (1) or E

d 

 exp(−Ytj )

=

 E exp(−Ytj ) ;

j=1

j=1

j =1

 d 

that is, if the random variables exp(−Ytj ), j ∈ {1, . . . , d}, are independent. This holds if we choose (Yt )t ∈[0,∞) as any deterministic, right-continuous, and increasing function with Y0 = 0 (take Yt ≡ 0, for example). Hence, the independence copula Π is a Sibuya copula. To see that the upper Fréchet–Hoeffding bound M is also a Sibuya copula, we let Mj ≡ 0 for all j ∈ {1, . . . , d}. This implies that Sj (t ) = ψYt (1) for all j ∈ {1, . . . , d}. Furthermore, let (Yt )t ∈[0,∞) be the killed process Yt = ∞1[V ,∞) (t ), t ∈ [0, ∞), where V > 0 is a random variable with distribution function F ; that is, (Yt )t ∈[0,∞) is a generalized increasing process in the sense of [12, p. 193]. In this case, ψYt (1) = E[exp(−Yt )] = 0 · P(V ≤ t ) + 1 · P(V > t ) = 1 − F (t ) =: F¯ (t ), and similarly, ψd Yt (1) = F¯ (max{t1 , . . . , td }). This implies that j=1

j

Ω ∗ (t ) =

F¯ (max{t1 , . . . , td }) d 

.

F¯ (tj )

j =1

Moreover, we have Sj (t ) = ψYt (1) = F¯ (t ), j ∈ {1, . . . , d}, and thus

Ω (u) =

F¯ (max{F¯ − (u1 ), . . . , F¯ − (ud )}) d 

=

min{u1 , . . . , ud }

Π (u)

F¯ (F¯ − (uj ))

=

M (u) . Π (u)

j =1

Therefore, C ≡ M in this setup. Another way of looking at limits is taking margins. It is easy to see that if C (u) = Ω (u)Π (u) is a d-dimensional Sibuya copula, ∅ ̸= J ⊆ {1, . . . , d}, and uJ denotes the corresponding |J |-dimensional subvector of u with the components of u with index in J picked out, then the corresponding J-dimensional marginal copula C J of C is C J (uJ ) = Ω (uJ )Π (uJ ),  ∗ where Ω (uJ ) = Ω ∗ ((u− j )j∈J ) with Ω (tJ ) = ψ j∈J Yt (1)/ j



j∈J

ψYtj (1).

3.2. Symmetry One of the nice features of Sibuya copulas is the separation of the joint process (Yt )t ∈[0,∞) (responsible for creating dependence) and the individual components Mj , j ∈ {1, . . . , d}; see (4). This allows us to control the symmetry of a Sibuya copula C : C is symmetric (or exchangeable) if and only if the functions Mj , j ∈ {1, . . . , d}, are all equal; that is, if and only if (S) holds. In particular, C is symmetric if all Mj are zero everywhere. 3.3. Sharp bounds on Ω ∗ (t , . . . , t ) and limits We now investigate bounds on the diagonal of Ω ∗ and their implications. By considering Ω ∗ (t , . . . , t ), t ∈ [0, ∞) we obtain the following bounds:

E[exp(−Yt )] E[exp(−dYt )] = Ω ∗ (t , . . . , t ) ≤ = (E[exp(−Yt )])1−d = ψYt (1)1−d , (10) (E[exp(−Yt )])d (E[exp(−Yt )])d where the first inequality is an application of Jensen’s inequality to the convex function x → xd . As we have seen in Section 3.1, the lower bound is sharp and can be achieved by taking (Yt )t ∈[0,∞) to be deterministic. The upper bound is also sharp. To see this, take Mj ≡ 0 for all j ∈ {1, . . . , d} and note that due to the Fréchet–Hoeffding upper bound 1≤

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

323

Ω (u, . . . , u) ≤ min{u, . . . , u}/Π (u, . . . , u) = u1−d (which is sharp, by Section 3.1), so we obtain the sharp upper bound Ω ∗ (t , . . . , t ) = Ω (S (t ), . . . , S (t )) ≤ S (t )1−d = ψYt (1)1−d . It follows from (10) that lim Ω ∗ (t , . . . , t ) = 1. t ↓0

Under (L), we obtain from (9) that

 lim Ω (t , . . . , t ) = lim

exp(−Ψ (d))



t →∞

t →0

exp(−dΨ (1))

t

 =

1,

Ψ (d) = dΨ (1), Ψ (d) < dΨ (1).

∞,

3.4. Concordance One question is if it is possible to attain strict negative concordance; that is, if there is a Sibuya copula C such that

Ω (u) < 1 for all u ∈ [0, 1]d . As we will now show, at least in the symmetric case (S) where all Mj , j ∈ {1, . . . , d}, are equal, there is no such model. To this end, we will look at the diagonal of Ω and show that it is always greater than or equal − to one in the symmetric case. Since we are in the symmetric case, u− j = S (uj ), j ∈ {1, . . . , d}, where S ≡ S1 . This implies − u(j) = S − (u[j] ), where u[j] denotes the jth largest value of uk , k ∈ {1, . . . , d}. Now since we look at the diagonal of C only, all the u− (j) are equal; let us denote their value by t. We thus obtain from (10) that

Ω (u) = Ω ∗ (t , . . . , t ) ≥ 1. Under (L), one can even show positive association of U ∼ C (equivalently, C ); see [5] for a definition and properties of this notion. First, note that, due to independent increments, finite projections of subordinators can be written as partial sums of independent random variables and are thus positively associated. By Lindqvist [18], subordinators are thus positively associated. As decreasing functionals in (Yt )t ∈[0,∞) , it follows from (2) that (p1 (t ), . . . , pd (t ))⊤ is positively associated. By (3), τ admits the stochastic representation (E1−1 Λ1,t1 , . . . , Ed−1 Λd,td )⊤ for independent and identically distributed unit exponentials E1 , . . . , Ed . P2 in [5] thus implies that also τ is positively associated and so is its survival copula C . Finally, let us remark that it follows from [14, p. 27] that C is positive lower and upper orthant dependent. 3.5. Tail dependence, extremal dependence For Xj ∼ Fj , j ∈ {1, 2}, with joint copula C , the lower and upper tail-dependence coefficients λL and λU , respectively, are given by

λL = lim P(X2 ≤ F2− (u) | X1 ≤ F1− (u)) = lim u↓0

u ↓0

λU = lim P(X2 > F2− (u) | X1 > F1− (u)) = lim u↑1

u ↑1

C (u, u) u

,

1 − 2u + C (u, u) 1−u

= lim u↓0

Cˆ (u, u) u

,

where Cˆ denotes the survival copula corresponding to C and the limits are assumed to exist. By definition, the lower (upper) tail-dependence coefficient tells us the probability, in the limit, that one variable is small (large) given that the other is. Let Xj ∼ Fj , j ∈ {1, . . . , d}, with joint copula C . There are several ways to extend the notion of tail dependence to the general d-dimensional case; see [13, p. 228]. One way is to consider the multivariate tail-dependence coefficients

λL,J = lim P(Xk ≤ Fk− (u) for all k ̸∈ J | Xj ≤ Fj− (u) for all j ∈ J ) = lim u↓0

u↓0

λU ,J = lim P(Xk > Fk− (u) for all k ̸∈ J | Xj > Fj− (u) for all j ∈ J ) = lim u ↑1

u↓0

C (u, . . . , u) C J (u, . . . , u)

,

Cˆ (u, . . . , u) , ˆC J (u, . . . , u)

for all ∅ ̸= J ( {1, . . . , d}, where C J (Cˆ J ) denotes the marginal copula of C (Cˆ ) for all components being one except those in J. C is said to be lower (upper) tail dependent if λL,J > 0 (λU ,J > 0) for some J. The notion of extremal dependence was introduced by Frahm [6]. For Xj ∼ Fj , j ∈ {1, . . . , d}, with joint copula C , the lower and upper extremal-dependence coefficients εL and εU , respectively, are given by

   C ( u, . . . , u)  εL := lim P max {Fj (Xj )} ≤ u min {Fj (Xj )} ≤ u = lim , 1≤j≤d u↓0 u↓0 1 − C 1≤j≤d ˆ ( 1 − u, . . . , 1 − u)     Cˆ (1 − u, . . . , 1 − u) , εU := lim P min {Fj (Xj )} > u max {Fj (Xj )} > u = lim 1≤j≤d u ↑1 u ↑1 1≤j≤d 1 − C (u, . . . , u) 

324

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

where Cˆ denotes the survival copula of C and the limits are assumed to exist. By definition, the lower (upper) extremaldependence coefficient tells us the probability, in the limit, that the largest (smallest) value of Fj (Xj ), j ∈ {1, . . . , d}, is small (large) given that the smallest (largest) value is. To compute the quantities λL , λU , λL,J , λU ,J , εL , and εU , we see that they involve the copula diagonal and the diagonal of the survival copula (possibly of a lower-dimensional margin). In the symmetric case, they can be computed as follows. Proposition 3.1. Let C be a d-dimensional Sibuya copula and assume (S) to hold (with S (t ) = ψM (t )+Yt (1) = exp(−M (t ))

ψYt (1)). Let u− = S − (u). Then

ψY − (d)

(1) (i) C (u, . . . , u) = ud ψ u (1)d ; Y − u

(ii) Cˆ (1 − u, . . . , 1 − u) =

 

d

d k

k =0

ψY − (k) u k. Y − (1)

(−u)k ψ

u

(2) If, additionally, (L) holds, then   (i) C (u, . . . , u) = ud exp u− (dΨ (1) − Ψ (d)) ; (ii) Cˆ (1 − u, . . . , 1 − u) =

 

d

d k

k=0

  (−u)k exp u− (kΨ (1) − Ψ (k)) .

Remark 3.2. (1) Proposition 3.1 allows one to find formulas for the tail- and extremal-dependence coefficients addressed above in the symmetric case (S). As an example where the calculations lead to explicit formulas, we consider M (t ) = µt ,

µ ≥ 0.

(M)

In this case, a tedious rather than complicated calculation shows that the tail-dependence coefficients are given by

λL = 0,

λU =

2Ψ (1) − Ψ (2)

µ + Ψ (1)

,

the multivariate tail-dependence coefficients by d    d

λL,J = 0,

k=1

λU ,J =

k

|J |    |J | k =1

k

(−1)k−1 (kµ + Ψ (k)) , (−1)

k−1

(kµ + Ψ (k))

for all ∅ ̸= J ( {1, . . . , d}, and the extremal-dependence coefficients by

εL = 0,

εU =

d    d k =1

k

(−1)k−1

kµ + Ψ (k) dµ + Ψ (d)

.

Note that by the Lévy–Khintchine formula for subordinators (see, for example, [1, p. 72]), these quantities can also be expressed in terms of the underlying Lévy measure. (2) Assumptions (L) and (M) are convenient in that they allow one to explicitly express u− = S − (u) and therefore the tailand extremal-dependence coefficients based on Proposition 3.1. Without an explicit inverse S − it may still be possible to find bounds for the corresponding quantity of interest. For example, for the lower tail-dependence coefficient under (only) (S), an application of (10) leads to

  λL = lim uΩ (u, u) = lim S (t )Ω ∗ (t , t ) ≤ exp −M (S − (0)) . u↓0

t ↑S − (0)

If Yt < ∞ and M (t ) < ∞ for all t ∈ [0, ∞), then S − (0) = ∞; hence λL ≤ exp(− limt ↑∞ M (t )). If M (t ) → ∞ for t → ∞, then λL = 0. 3.6. Sampling The intuitive construction principle of Sibuya copulas allows for the following general sampling algorithm. Algorithm 3.3. (1) (2) (3) (4) (5)

sample U ∼ U [0, 1]d sample a path of (Yt )t ∈[0,T ] , where T = inf{t ≥ 0 : exp(−Mj (t ) − Yt ) ≤ Uj , j ∈ {1, . . . , d}} for j ∈ {1, . . . , d}, find τj = inf{t ≥ 0 : exp(−Mj (t ) − Yt ) ≤ Uj } return (S1 (τ1 ), . . . , Sd (τd ))⊤ .

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

325

The following sampling algorithm describes the procedure for drawing vectors of random variates from C if (Yt )t ∈[0,∞) is a pure jump process. Note that the number of occurrences to be sampled from the jump process depends on the given trigger variates Uj , j ∈ {1, . . . , d}, as well as on the deterministic functions Mj , j ∈ {1, . . . , d}. Algorithm 3.4. (1) (2) (3) (4) (5) (6) (7)

sample U ∼ U [0, 1d ] t0 := 0, k := 1, and Jk := {1, . . . , d} repeat { sample the kth occurrence tk of the jump process (Yt )t ∈[0,∞) find J ⊆ Jk : j ∈ J ⇔ Uj ≥ pj (tk ) = exp(−(Mj (tk ) + Ytk )) for j ∈ J { # find τj for all j ∈ J if Uj ≤ pj (tk −) = exp(−(Mj (tk ) + Ytk−1 )) τj := tk





(8) (9) (10) (11) (12) (13) (14)

else find τj on (tk−1 , tk ) via τj := Mj− (− log Uj − Ytk−1 ) # Uj ∈ (pj (tk −), pj (tk−1 )) } Jk+1 := Jk \ J # indices j for which τj have not been determined yet if (Jk+1 = ∅) break else k := k + 1 } return (S1 (τ1 ), . . . , Sd (τd ))⊤ .

4. Special subclasses of Sibuya copulas and their properties In this section, we briefly present several subclasses of Sibuya copulas. In particular, we consider a pure jump process based on a non-homogeneous (Section 4.1) and homogeneous (Section 4.2) Poisson process and a pure linear process for (Yt )t ∈[0,∞) (Section 4.3). Among various properties, the first two subclasses are able to flexibly model singular components and the latter subclass will lead to an asymmetric generalization of Archimedean copulas. 4.1. Non-homogeneous Poisson Sibuya copulas First, we consider the case of a pure jump process for (Yt )t ∈[0,∞) based on a non-homogeneous Poisson process. Our choice in the following proposition corresponds to the default model of [11] where the jump size is set constant and equal to H. This specific case has been worked out in more detail by Vrins [26], who showed for instance that the analytical expression of the joint survival distribution can then be derived. The corresponding coupling function is a non-homogeneous Poisson Sibuya copula, a class of copulas we now introduce. Proposition 4.1 (Non-homogeneous Poisson Sibuya Copulas). Let Yt = HNt ,

t ∈ [0, ∞),

(NP)

for a non-homogeneous Poisson process (Nt )t ∈[0,∞) with N0 = 0,

Nt ∼ Poi (Λ(t )),

and Λ(t ) =

t



λ(s) ds 0

for a deterministic, non-negative function λ : [0, ∞) → [0, ∞) such that Λ(0) = limt ↓0 Λ(t ) = 0 and where H ≥ 0 is a constant. Under (NP), the Sibuya copula (7) becomes

 C (u) = exp (1 − e

−H

 d  − −H (d−j) ) (1 − e )Λ(u(j) ) Π (u),

(11)

j=1

where Sj (t ) = exp −(Mj (t )+ Λ(t )(1 − e−H )) , j ∈ {1, . . . , d}. We refer to (11) as a non-homogeneous Poisson Sibuya copula.





Remark 4.2. Under (NP), we have the following remarks. (1) It is trivial to see that Ω (u) ≥ 1 for all u ∈ [0, 1]d ; hence C is positive lower orthant dependent. (2) In the bivariate case, a non-homogeneous Poisson Sibuya copula C can be written as C (u1 , u2 ) = exp((1 − e−H )2 Λ(u− (1) ))u1 u2

= exp((1 − e−H )2 Λ(min{S1− (u1 ), S2− (u2 )}))u1 u2 .

(12)

From this functional form one may derive that C allows for a flexible singular component, given by {u ∈ [0, 1]2 : S1− (u1 ) = S2− (u2 )}.

326

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

(3) Let a = 1 − e−H and note that Sj (t ) ≤ exp(−aΛ(t )) so that S − (u) ≤ Λ− (− log(u)/a), j ∈ {1, 2}. This implies exp a2 Λ(min{S1− (u), S2− (u)}) ≤ exp a2 Λ(S1− (u)) ≤ exp a2 Λ(Λ− (− log(u)/a))













= u− a , so that 0 ≤ λL = lim u exp(a2 Λ(min{S1− (u), S2− (u)})) ≤ lim u1−a = 0; u↓0

u ↓0

that is, λL = 0. (4) Since one has Ytk = Hk, k ∈ {1, 2, . . .}, one can apply Algorithm 3.4. Note that Step (4) can be achieved with the following algorithm (see, for example, [3, p. 257]), where th,0 = 0. Algorithm 4.3. (1) sample E ∼ Exp (1) (2) th,k := th,k−1 + E # kth occurrence of a homogeneous Poisson (3) # process with unit intensity (4) tk := Λ− (th,k ) # kth occurrence of a non-homogeneous Poisson (5) # process with integrated rate function Λ. Example 4.4 (Non-homogeneous Poisson Sibuya Copulas). Let us first consider a bivariate non-homogeneous Poisson Sibuya copula C as given in (12). Let the ‘‘intensities’’ λ and µj , j ∈ {1, . . . , d}, be linear instead of constant; that is, let

λ(s) = aλ s + bλ ,

µj (s) = aj s + bj ,

j ∈ {1, . . . , d},

where aλ , bλ , aj , bj ∈ [0, ∞), j ∈ {1, . . . , d}. Further, let us assume the non-trivial case where not both aλ (aj ) and bλ (bj ) are zero simultaneously. Letting c = (1 − e−H ), we obtain

Λ(t ) = aλ t 2 /2 + bλ t ,

Mj (t ) = aj t 2 /2 + bj t ,

pj (t ) = exp(−(aj t 2 /2 + bj t + HNt )), Sj (t ) = exp(−((aj + caλ )t 2 /2 + (bj + cbλ )t )) for j ∈ {1, . . . , d}. The corresponding inverses are

  Λ− (t ) = t /bλ 1{aλ =0} + b2 + 2at − b /a1{aλ >0} ,   b2 + 2at − b /a1{aj >0} , Mj− (t ) = t /bj 1{aj =0} +  (bj + cbλ )2 − 2(aj + caλ ) log(u) − (bj + cbλ ) − . Sj (u) = aj + caλ Note that if tk denotes the kth occurrence of the non-homogeneous Poisson (Nt )t ∈[0,∞) process with integrated rate function

Λ, then

pj (tk ) = exp(−(aj tk2 /2 + bj tk + Hk)), pj (tk −) = exp(−(aj tk2 /2 + bj tk + H (k − 1))). With these quantities one can evaluate the copula C and apply Algorithm 3.4 with Algorithm 4.3 for drawing vectors of random variates from C . Figs. 1 and 2 show two examples. Note that although we choose linear intensities, the resulting structures are quite different; for example, see the shape of the singular components. One may again infer that these copulas are able to capture highly asymmetric structures. Example 4.5 (Swiss Alps Copulas). Another example which shows the flexibility and possible asymmetry of nonhomogeneous Poisson Sibuya copulas is the following, more exotic, one. Assume

λ(s) = 1[1,∞) (s)/s,

µ1 (s) = a,

µ2 (s) = 12N (⌊s⌋),

where a ∈ [0, ∞). This implies

Λ(t ) = max{log t , 0}, Λ− (t ) = exp(t )1(0,∞) (t ),

M1 (t ) = at ,

M2 (t ) =

M1− (t ) = t /a,



⌈t ⌉/2, t − (⌈t ⌉ − 1)/2,

⌈t ⌉ ∈ 2N, ⌈t ⌉ ∈ 2N − 1,

M2− (t ) = (t + ⌈t ⌉ − 1)1(0,∞) (t ).

From these ingredients one can infer that Sj (t ) = exp(−Mj (t )) min{1, t −c }, j ∈ {1, 2}, where c = 1 − e−H . Fig. 3 shows the copula including the singular component and a scatter plot with H = 10 and a = 1/100. Even under (NP), the ‘‘freedom’’ in choosing Sj , j ∈ {1, 2}, allows us to construct copulas with such flexible curves as singular components (which resemble a path on top of the Swiss Alps).

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

327

Fig. 1. Non-homogeneous Poisson Sibuya copula (12) with singular component for Example 4.4 with H = 10, aλ = 0.1, bλ = 4, a = (a1 , a2 )⊤ = (1, 100)⊤ , and b = (b1 , b2 )⊤ = (5, 0)⊤ (left). 1000 generated vectors of random variates from this copula (right). The lower and upper tail-dependence coefficients are given by λL = 0 and λU = 0.44, respectively.

Fig. 2. Non-homogeneous Poisson Sibuya copula (12) with singular component for Example 4.4 with H = 10, aλ = 0.1, bλ = 4, a = (a1 , a2 )⊤ = (1, 100)⊤ , and b = (b1 , b2 )⊤ = (0, 0)⊤ (left). 1000 generated vectors of random variates from this copula (right). The lower and upper tail-dependence coefficients are given by λL = 0 and λU = 1, respectively.

4.2. Homogeneous Poisson Sibuya copulas The following corollary addresses the homogeneous special case of Proposition 4.1. Corollary 4.6. If, in addition to (NP), one assumes a linear setting Mj (t ) = µj t ,

µ j ≥ 0,

and Λ(t ) = λt ,

λ ≥ 0,

(HP)

then we obtain homogeneous Poisson Sibuya copulas, given by C (u) = Π (u)

d 

−H )(1−e−H (d−j) )

· (1−e (u−λ/λ )(j) ·

j =1

where λj = µj + λ(1 − e−H ), j ∈ {1, . . . , d}.

,

(13)

328

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

Fig. 3. Swiss Alps copula with singular component for Example 4.5 with H = 10 and a = 1/100 (left). 1000 generated vectors of random variates from this copula (right).

Remark 4.7. Under (HP), we have the following remarks. (1) In the bivariate case, the homogeneous Poisson Sibuya copula (13) becomes −λ/λ1

C (u1 , u2 ) = min{u1

−λ/λ2 (1−e−H )2

, u2

}

1−θ1

u1 u2 = min{u1

1−θ2

u2 , u1 u2

},

(14)

where θj = (1 − e ) λ/λj ∈ [0, 1], that is, a Marshall–Olkin copula; see [4]. In this case, one has explicit formulas for Spearman’s rho and Kendall’s tau, see, for example, [4], for the tail-dependence coefficients [23, p. 215], as well as for the probability of falling on the singular component, see, for example, [23, p. 54]. (2) Homogeneous Poisson Sibuya copulas (13) are max-stable and therefore extreme-value copulas; see, for example, [23, p. 95]. In the symmetric case, that is, µj =: µ for all j ∈ {1, . . . , d}, (13) can be written as −H 2

C (u) = Π (u)

d 

−c (1−e−H (d−j) )

u[ j ]

j =1

=

d 

=

d  j =1

1−c (1−e−H (d−j) )

u(j)

u(j) ·

d 

−c (1−e−H (d−j) )

u[j]

j =1

,

j =1

where c = λ(1 − e−H )/(µ + λ(1 − e−H )) and the subscript [j] stands for the jth largest element. Thus, in the symmetric case, homogeneous Poisson Sibuya copulas are Lévy-frailty copulas. In general, this is not true since, for example, Lévyfrailty copulas are restricted to have the diagonal as pairwise singular components. (3) For sampling homogeneous Poisson Sibuya copulas, Step (8) of Algorithm 3.4 boils down to setting τj = (− log Uj − H (k − 1))/µj and Step (4) of Algorithm 4.3 to tk = th,k /λ. Since, under (HP), (Yt )t ∈[0,∞) is a Lévy process, Remark 3.2(1) provides the multivariate tail-dependence and the extremal-dependence coefficients in the symmetric case. For (Yt )t ∈[0,∞) under (HP), one can extend these results to the asymmetric case as shown in the following two propositions. Proposition 4.8. Assume (HP) to hold. Then the following hold. (1) The multivariate tail-dependence coefficients are given by d    d

λL,J = 0,

λU ,J =

|J |    |J | i =1

where ∅ ̸= J ( {1, . . . , d}.

k

k=1

i

 (−1)

k

k − λ(1 − e

−H

)

d 

 (1 − e

−H (d−j)

)/λ[j]

j=d−k+1

 (−1)i i − λ(1 − e−H )

|J |  k=|J |−i+1

(1 − e−H (d−jk ) )/λ[jk ]

,

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

329

(2) The lower and upper extremal-dependence coefficients are given by d

εL = 0,

εU =

  d k

k=1

d 

k − λ(1 − e−H )

(1 − e−H (d−j) )/λ[j]

j=d−k+1

(−1)

k−1

d − λ(1 −

e −H )

d 

(1 −

, e−H (d−j) )/λ[j]

j =1

respectively. 4.3. Archimedean Sibuya copulas Now we consider the case where (Yt )t ∈[0,∞) is a randomly scaled function. Proposition 4.9 (Archimedean Sibuya Copulas). Let Yt = Vf (t ),

t ∈ [0, ∞),

(V)

for an almost surely positive random variable V ∼ F and an increasing function f : [0, ∞) → [0, ∞) with f (0) = 0. Then we obtain Archimedean Sibuya copulas, given by

 ψV C ( u) =

d 

 f (Sj− (uj ))

j=1 d 

Π (u),

(15)

  ψV f (Sj− (uj ))

j =1

where Sj (t ) = ψMj (t )+Vf (t ) (1) = exp(−Mj (t ))ψV (f (t )), j ∈ {1, . . . , d}. Remark 4.10. (1) If f (t ) = t and Mj ≡ 0, j ∈ {1, . . . , d}, then C (u) = ψV (ψV−1 (u1 ) + · · · + ψV−1 (ud )), that is, the Archimedean copula generated by ψV . We see that (15) provides a new generalization of Archimedean copulas. This generalization allows for non-exchangeability, that is, asymmetry. Note that there has recently been quite a lot of interest in constructing such generalizations of Archimedean copulas, via Liouville copulas (see, for example, [22]), or nested Archimedean copulas (see, for example, [20] or [8]). Sibuya copulas as given in (15) naturally allow for asymmetries through the functions Mj , j ∈ {1, . . . , d}, and therefore more realistic dependence structures, especially in large dimensions. (2) Archimedean Sibuya copulas can be sampled as follows. Algorithm 4.11. (1) sample U ∼ U [0, 1d ] (2) sample V ∼ F = LS −1 [ψV ] (3) for j ∈ {1, . . . , d}, find τj = inf{t ≥ 0 : exp(−Mj (t ) − Vf (t )) ≤ Uj } (4) # note that τj ∈ [0, f − ((− log Uj )/V )] (5) return (S1 (τ1 ), . . . , Sd (τd ))⊤ , where Sj (t ) = exp(−Mj (t ))ψV (f (t )), j ∈ {1, . . . , d}. Step (2) of Algorithm 4.11 is easily achieved for many known Archimedean generators; see, for example, [8, p. 62]. This is the same step required as in the famous algorithm of [19]. Typically, the efficiency of Algorithm 4.11 depends on Step (3), that is, on the functional forms of Mj , j ∈ {1, . . . , d}, and f . In what follows, we consider f to be the identity, that is, Yt = Vt ,

t ∈ [0, ∞),

so (Yt )t ∈[0,∞) is a line with random slope. Example 4.12 (Khoudraji-Transformed Archimedean Copulas). Let f (t ) = t , t ∈ [0, ∞). In this example, we consider the Mj 1/αj −1

to be of the form − log ψV (t ), so that ψV (Sj− (t )) = t αj . In this case, Archimedean Sibuya copulas take on a certain (possibly asymmetric) transformation of the underlying Archimedean copulas, known as Khoudraji’s device; see [15] or [7]. (1) If ψV (t ) = ψθ (t ) = (1 + t )−1/θ , θ ∈ (0, ∞), denotes a generator of the Archimedean Clayton copula and Mj (t ) = 1/αj −1

− log ψθ

(t ) = − log ψθαj /(1−αj ) (t ) = (1 − αj ) log(1 + t )/(θ αj ), t ∈ [0, ∞), for αj ∈ (0, 1), then α

α

1−α1

C (u) = Cθ (u1 1 , . . . , ud d ) Π (u1

1−αd

, . . . , ud

),

(16)

where Cθ denotes Clayton’s copula generated by ψθ ; the limiting cases C0 (independence copula) and C∞ (upper Fréchet bound) as well as αj ∈ {0, 1} can be included. Therefore, by our choice of parameter (functions), we obtain a Khoudraji

330

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

Fig. 4. 1000 realizations of a random vector following a Khoudraji transformed Clayton copula (left) with θ = 8 and (α1 , . . . , α4 )⊤ = (0.05, 0.5, 0.9, 0.99)⊤ , and a Khoudraji transformed Gumbel copula (right) with θ = 5 and (α1 , . . . , α4 )⊤ = (0.99, 0.9, 0.5, 0.3)⊤ .

transformed Clayton copula. This copula allows for asymmetries, as long as at least two of the αj are distinct. A basic calculation shows that for bivariate copulas of this form, λL = λU = 0, so it is tail independent (in contrast to Clayton copulas for which λL > 0 = λU ). (2) If ψV (t ) = ψθ = exp(−t 1/θ ), θ ∈ [1, ∞), denotes a generator of the Archimedean Gumbel copula and Mj (t ) = 1/αj −1

(t ) = (1/αj − 1)t 1/θ , t ∈ [0, ∞), for αj ∈ (0, 1], then the Sibuya copula C again takes the form (16), with Cθ being Gumbel’s copula with generator ψθ ; the limiting cases C∞ (upper Fréchet bound) as well as αj = 0 can be included. Note that the function ga,b (x) = (ax + bx )1/x , a, b ∈ (0, 1), x ∈ [1, ∞), has the following properties: ga,b is positive, ga,b is decreasing on [1, ∞), ga,b (1) = a + b, and limx→∞ ga,b (x) = max{a, b}. With this result, one can show that the Khoudraji transformed Gumbel copula has λL = 0 and λU = α1 + α2 − (α1θ + α2θ )1/θ . Note that λU ∈ [0, min{α1 , α2 }), with λU = 0 if θ = 1 and λU → min{α1 , α2 } for θ → ∞. − log ψθ

Fig. 4 shows 1000 realizations of a random vector following Khoudraji transformed Clayton and Gumbel copulas with Kendall’s tau and tail-dependence coefficients as given. Note that both the tail-dependence parameters and Kendall’s tau are bounded above by the corresponding quantities of the Archimedean copulas, as shown in [17]. A random vector U following a Khoudraji transformed copula allows for a stochastic representation (and thus a sampling algorithm), given by U = (U1 , . . . , Ud )⊤

with Uj = max Uj′



1/αj

, Uj′′

1/(1−αj )

 ,

where U ′ ∼ Cψθ and U ′′ ∼ U [0, 1]d ; see [21, p. 225]. Note that Uj′ = ψθ (Rj /V ),

j ∈ {1, . . . , d},

(17)

−1

where V ∼ Fθ = LS [ψθ ] and the Rj are independent standard exponentials. Another nice feature of Khoudraji transformed Archimedean copulas is that their density can be derived explicitly as c (u) =

 J ⊆{1,...,d}

(|J |)

ψV



d 

αj



ψV (uj ) −1

j=1

 j∈J

α

αj (ψV−1 )′ (uj j )



−αj

(1 − αj )uj

,

(18)

j̸∈J

(k)

where ψV denotes the kth derivative of the Archimedean generator ψV . These derivatives were recently obtained in explicit form by Hofert et al. [10] and are implemented in the R package copula; see also [9]. Example 4.13 (Archimedean Sibuya Copulas Under (Mj )). Let f (t ) = t , t ∈ [0, ∞). In this example, we consider Mj (t ) = µj t ,

j ∈ {1, . . . , d},

that is, linear Mj . This implies that Sj (t ) = exp(−µj t )ψV (t ) = ψµj +V (t )

( Mj )

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

331

so that

τj = inf{t ≥ 0 : exp(−(µj + V )t ) ≤ Uj } =

− log Uj , µj + V

j ∈ {1, . . . , d}.

Therefore, Archimedean Sibuya copulas under (Mj ) allow for the stochastic representation

 U = (U1 , . . . , Ud )



with Uj = ψµj +V



Rj

µj + V

,

where Rj ∼ Exp (1), j ∈ {1, . . . , d}, independent of each other. This stochastic representation is similar to those of [19]; see (17). However, U is not restricted to exchangeability. Archimedean Sibuya copulas under (Mj ) are given by

 ψV

d  j=1

C ( u) =

d

 ψµ−j1+V (uj ) Π (u),

ψV (ψµ−j1+V (uj ))

 j =1

and thus have a density, given by c (u) =

d 



ψV (ψµ−j1+V (uj )) J ⊆{1,...,d}

j =1

·

1

(|J |)

ψV

uj

j∈J

ψµ′ j +V (ψµ−j1+V (uj ))

d 

 ψµ−j1+V (uj )

j =1

 





1−

j̸∈J

uj ψV′ (ψµ−j1+V (uj ))



ψµ′ j +V (ψµ−j1+V (uj ))ψV (ψµ−j1+V (uj ))

.

(19)

For the lower tail-dependence coefficient of the corresponding bivariate copula C , note that

  ψV t + S2− (S1 (t )) S1 (t )2 1   λL = lim = lim t →∞ t →∞ S1 ( t ) ψV (t )ψV S2− (S1 (t )) S1 (t )   ψV t + S2− (S1 (t ))   exp(−µ1 t ) ≤ lim exp(−µ1 t ). = lim t →∞ t →∞ ψV S2− (S1 (t )) C (S1 (t ), S1 (t ))

Similarly,

λL = lim

C (S2 (t ), S2 (t ))

≤ lim exp(−µ2 t ).

S2 (t )

t →∞

t →∞

This implies that

λL ≤ exp(− max{µ1 , µ2 }t ), and thus λL = 0 if µ1 > 0 or µ2 > 0, that is, in the case where C is not Archimedean. Clearly, if µ1 = µ2 = 0, then C is the Archimedean copula generated by ψV (from which the tail-dependence coefficients are typically straightforward to derive). Concerning the upper tail-dependence coefficient, one obtains d dt

C (t , t ) = 2

C (t , t ) t

 −

2

t

ψV (ψµ−11+V (t ))ψV (ψµ−21+V (t ))

h(t ),

where h(t ) = ψV ψV (ψµ−11+V (t )) + ψV (ψµ−21+V (t )) − ψV (ψµ−11+V (t ))ψV (ψµ−21+V (t ))







 · ψV (ψµ−11+V (t ) ′

+

ψµ−21+V (t ))



1

ψµ′ 1 +V (ψµ−11+V (t ))

+

1

ψµ′ 2 +V (ψµ−21+V (t ))

 .

If h(1−) = limt ↑1 h(t ) exists, then

λU = 2 − lim t ↑1

d dt

C (t , t ) = h(1−).

If ψV′ (0+) > −∞, then ψµ′ j +V (0+) > −∞, so that λU = 0. This is the case, for example, if ψV is a generator of the Clayton copula.

332

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

Fig. 5. 1000 realizations of a random vector following a Clayton Sibuya copula under (Mj ) (left) with θ = 8 and (µ1 , . . . , µ4 )⊤ = (0, 0, 0.002, 0.02)⊤ , and a Gumbel Sibuya copula under (Mj ) (right) with θ = 5 and (µ1 , . . . , µ4 )⊤ = (0, 0, 1, 100)⊤ .

Fig. 5 shows 1000 realizations of a random vector following Clayton and Gumbel Sibuya copulas under (Mj ) with sample version of Kendall’s tau and tail-dependence coefficients as given (λU in the Gumbel case (determined by Maple) is equal to the upper tail-dependence coefficient of the Gumbel copula generated by ψV ). Note that in both scatter plots, µ1 = µ2 = 0, so that the pair (U1 , U2 )⊤ follows the corresponding Archimedean copula. 5. Generalizations A key feature of the model leading to Sibuya copulas is that it can easily be extended without affecting its tractability. Although several extensions are possible, we briefly address two generalizations of the copula construction. First, we replace the single process (Yt )t ∈[0,∞) by a hierarchy of processes (Yi,t )t ∈[0,∞) , i ∈ {1, . . . , I }. Then, we address the case of dependent trigger variables U . 5.1. Generalization to hierarchical processes The process (Yt )t ∈[0,∞) is fatal in the sense that it affects all components simultaneously. In practical applications, it might be the case that only certain subgroups are affected. Such a hierarchical behavior may be modeled via (possibly dependent) processes (Yi,t )t ∈[0,∞) for i ∈ {1, . . . , I }, where I is the number of subgroups. These subgroups often arise naturally from the application considered, for example, by given industry sectors, macroeconomic effects, geographical regions, political decisions, or consumer trends. A model incorporating such hierarchies can be constructed with the stochastic processes (Λj,t )t ∈[0,∞) , j ∈ {1, . . . , d}, in (4) being replaced by

Λij,t = Mij (t ) + Yi,t ,

t ∈ [0, ∞), i ∈ {1, . . . , I }, j ∈ {1, . . . , di }.

The corresponding individual survival processes are then given by pij (t ) = exp(−Λij,t ), t ∈ [0, ∞), and the default time of entity j in group i by τij = inf{t ≥ 0 : pij (t ) ≤ Uij }, where Uij ∼ U [0, 1], independent for all i ∈ {1, . . . , I }, j ∈ {1, . . . , di }. If the processes (Yi,t )t ∈[0,∞) , i ∈ {1, . . . , I }, are independent, the joint survival function S can be derived similarly as in the proof of Theorem 2.2. First, note that the individual survival functions Sij are given by Sij (t ) = ψMij (t )+Yi,t (1) = exp(−Mij (t ))ψYi,t (1), t ∈ [0, ∞), i ∈ {1, . . . , I }, j ∈ {1, . . . , di }. The joint survival function S can then be calculated as S (t11 , . . . , t1d1 , . . . , tI1 , . . . , tIdI ) =

di ψ I  (di − j + 1)  Yi,t −Yi,t i(j) i(j−1)

ψYi,ti(j) (1)

i=1 j=1

Sij (tij ),

where ti(j) denotes the jth smallest value of all components {ti1 , . . . , tidi } in sector i. The corresponding copula C is thus given by C (u) = Π (u)

(di − j + 1) di ψY − I  −Y −  i,S (ui· )(j) i,S (ui· )(j−1) i·

i=1 j=1



ψY

i,S

− (u ) i· i· (j)

(1)

,

which is a product of Sibuya copulas and thus itself in Sibuya form.

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

333

5.2. Generalization to dependent trigger variables A further source for introducing dependence is the default triggers. We now assume U ∼ CU ; that is, the trigger variables Uj , j ∈ {1, . . . , d}, are dependent according to the copula CU . Note that this does not influence the marginal survival functions Sj , j ∈ {1, . . . , d}, as given in (6). Redoing the calculations as carried out in the proof of Theorem 2.2 leads to the joint survival function S (t ) = E CU exp(−(M1 (t1 ) + Yt1 )), . . . , exp(−(Md (td ) + Ytd ))









= E CU

S1 (t1 )

ψYt1 (1)

exp(−Yt1 ), . . . ,





Sd (td )

ψYtd (1)

exp(−Ytd )

.

The corresponding copula C is therefore given by





C ( u) = E CU

exp(−YS − (u1 ) ) 1

ψY − S

(u ) 1 1

(1)

u1 , . . . ,

exp(−YS − (ud ) ) d

ψY − S

(ud ) d

(1)

 .

ud

In the symmetric case (S), we have Sj ≡ S1 , j ∈ {2, . . . , d}. As an example where the copula C is given explicitly, consider CU (u) = α M (u) + (1 − α)Π (u) for α ∈ [0, 1]; that is, CU (u) is a convex combination of the upper Fréchet bound copula M and the independence copula Π . In this case, a short calculation based on Theorem 2.2 leads to S (t ) = α min S1 (tj ) + (1 − α) j

d ψ  Yt(j) −Yt(j−1) (d − j + 1)

ψYt(j) (1)

j =1

S1 (tj ),

so that the copula corresponding to S is given by C (u) = α M (u) + (1 − α)Π (u)

(d − j + 1) d ψY − −Y −  S (u(d−j+2) ) S (u(d−j+1) ) 1

1

j =1

ψY −

S (u(d−j+1) ) 1

(1)

,

where ud+1 = 1. Since the upper Fréchet bound copula M is a Sibuya copula, C is a convex combination of Sibuya copulas. 6. Conclusion We have introduced and investigated a new class of Sibuya copulas which naturally takes on the form of the product of the independence copula with a dependence function. Its construction can be motivated by a generalization of the standard intensity-based default model. The marginal survival functions of the default times involved in the construction scheme appear as parameter functions in the copula. Sibuya copulas thus naturally allow for asymmetries. Furthermore, the intuitive construction allows for a straightforward sampling algorithm. Depending on the properties of the underlying increasing stochastic process, Sibuya copulas may have a singular component or may be absolutely continuous. We have provided new examples for each of these cases and also investigated their properties. We have seen that Sibuya copulas are quite flexible: they can be, for example, extreme-value copulas, Lévy-frailty copulas, Marshall–Olkin copulas, or Archimedean copulas, depending on the functional parameters chosen. Generalizations of Archimedean copulas to allow for non-exchangeability could be derived. Finally, we have shown that the dependence model easily extends in several ways, for example, to incorporate hierarchies. Acknowledgments The authors would like to thank Fabrizio Durante (Free University of Bozen-Bolzano), Paul Embrechts (ETH Zurich), Alexander J. McNeil (Heriot-Watt University), Johanna Nešlehová (McGill University), and two anonymous reviewers for giving us valuable feedback. The first author (Willis Research Fellow) thanks Willis Re for financial support while this work was being completed. Appendix Proof of Theorem 2.2. By conditioning on Yt , the jth marginal survival function Sj (t ) = P(τj > t ), t ∈ [0, ∞), can be computed via Sj (t ) = P(pj (t ) ≥ Uj ) = P(exp(−(Mj (t ) + Yt )) ≥ Uj )

= E[P(exp(−(Mj (t ) + Yt )) ≥ Uj | Yt )] = E[exp(−(Mj (t ) + Yt ))] = ψMj (t )+Yt (1) = exp(−Mj (t ))ψYt (1).

(A.1)

334

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

The joint survival function of the default times can be computed in a similar fashion via S (t ) = P(pj (tj ) ≥ Uj ,

j ∈ {1, . . . , d}) = P(exp(−(Mj (tj ) + Ytj )) ≥ Uj , j ∈ {1, . . . , d}) = E[P(exp(−(Mj (tj ) + Ytj )) ≥ Uj , j ∈ {1, . . . , d} | Ytj , j ∈ {1, . . . , d})]      d d d    Y tj exp(−Mj (tj )) =E exp(−(Mj (tj ) + Ytj )) = E exp − j=1

j =1

ψd =

j=1 Ytj

d

 j=1

d (1) 

ψYtj (1)

j=1

d

Sj (tj ) = Ω ∗ (t )

j =1



Sj (tj ),

j =1

where in the second-last equality, we used exp(−Mj (tj )) = Sj (tj )/ψYtj (1), which follows from (A.1).



Lemma A.1. Let ak ∈ [0, ∞), k ∈ {0, . . . , d}, d ∈ N, with a0 = 0. Further, let bk ∈ R, k ∈ {1, . . . , d}, such that bk+1 = cbk for c ∈ R and k ∈ {1, . . . , d − 1}. Then the following two statements hold.

d d ak = k=1 (d − k + 1)(a(k) − a(k−1) ). kd=1 d (2) k=1 (1 − bk )(a(k) − a(k−1) ) = (1 − cbd )a(d) + (c − 1) k=1 bk a(k) .

(1)

 −1  −1 d d (d − k + 1)a(k−1) = dk= (d − k)a(k) = dk= 1 (d − k)a(k) = k=1 (d − k)a(k) implies that k=1 d d 0  d d d (d−k+1)(a(k) −a(k−1) ) = k=1 (d−k+1)a(k) − k=1 (d−k+1)a(k−1) = k=1 (d−k+1)a(k) − k=1 (d−k)a(k) = k=1 a(k) = d d d d−1 d−1 d−1 k=1 ak . For Part (2), k=1 (a(k) − a(k−1) ) = a(d) and k=1 bk a(k−1) = k=0 bk+1 a(k) = k=1 bk+1 a(k) = c k=1 bk a(k) d d d  d imply that k=1 (1 − bk )(a(k) − a(k−1) ) = a(d) − k=1 bk (a(k) − a(k−1) ) = a(d) − k=1 bk a(k) + k=1 bk a(k−1) = a(d) − bd a(d)  −1 d−1 d−1 d − kd=  1 bk a(k) + c k=1 bk a(k) = (1 − bd )a(d) + (c − 1) k=1 bk a(k) = (1 − cbd )a(d) + (c − 1) k=1 bk a(k) . Proof. For Part (1),

d

k =1

Proof of Corollary 2.3. It follows from Lemma A.1(1) with aj = Ytj , j ∈ {1, . . . , d}, that

 ψd

j=1 Ytj



(1) = E exp −

d 

 Y tj





= E exp −

d 

j=1

=

d  j =1

 (d − j + 1)(Yt(j) − Yt(j−1) )

j =1

ψYt(j) −Yt(j−1) (d − j + 1),

where, in the last equality, the independence assumption between any non-overlapping increments of the stochastic process is used. If the increments are stationary, we further have that ψYt −Yt equals ψYt −t , j ∈ {1, . . . , d}.  (j)

(j−1)

(j)

(j−1)

Proof of Proposition 3.1. Assume (S) to hold. Concerning Part (1)(i), let t = S − (u) and note that, by (8), C (u, . . . , u) = ud Ω ∗ (u− , . . . , u− ) = ud

ψYu− (d) ψYu− (1)d

.

Concerning Part (1)(ii), note that Cˆ (u) =



  (−1)|J | C ((1 − uj )1J (j) )j=1,...,d

J ⊆{1,...,d}

implies that Cˆ (1 − u, . . . , 1 − u) =



d   (−1)|J | Ω (u1J (j) )j=1,...,d u1J (j)

J ⊆{1,...,d}

=



j=1

(−u) Ω (u |J |



1J (j)

 )j=1,...,d .

J ⊆{1,...,d}

For this to be simplified, we need to compute Ω at arguments consisting of us and 1s. Under (S), we have

  Ω (u1J (j) )j=1,...,d = Ω ∗ (u˜ − ),

M. Hofert, F. Vrins / Journal of Multivariate Analysis 114 (2013) 318–337

335

˜ − = (u− 1J (j))j=1,...,d ; that is, the jth component of u˜ − is u− = S − (u) if j ∈ J and 0 otherwise. Since Y0 = 0, this where u implies that ψd Ω (u˜ − ) =

j=1 Yu˜ − j



d 

(1) =

ψY − (1) u˜

j =1

ψ|J |Yu− (1) ψYu− (1)|J |

=

ψYu− (|J |) ψYu− (1)|J |

.

j

Therefore, we obtain



Cˆ (1 − u, . . . , 1 − u) =

(−u)|J |

J ⊆{1,...,d}

ψYu− (|J |) ψYu−

Part (2) directly follows from these results.

=

(1)|J |

d    d k=0

k

(−u)k

ψYu− (k) ψYu− (1)k

.



Proof of Proposition 4.1. Assume (NP). Since the Laplace–Stieltjes transform of Z ∼ Poi (γ ) equals ψZ (x) = exp −γ (1 −

  exp(−x)) , x ∈ [0, ∞), γ ∈ (0, ∞), it follows that Sj (t ) = exp(−Mj (t ))ψNt (H ) = exp(−Mj (t ) − Λ(t )(1 − e−H )), j ∈ {1, . . . , d}, as in the claim. Note that the non-homogeneous Poisson process (Nt )t ∈[0,∞) has independent increments with distribution Nt(j) − Nt(j−1) ∼ Poi (Λ(t(j) ) − Λ(t(j−1) )); thus, by Corollary 2.3, d ψ  Yt(j) −Yt(j−1) (d − j + 1)

Ω ∗ (t ) =

=

d ψ  Nt(j) −Nt(j−1) (H (d − j + 1))

ψYt(j) (1) ψNt(j) (H ) j =1   d  exp −(Λ(t(j) ) − Λ(t(j−1) ))(1 − e−H (d−j+1) ) = exp(−Λ(t(j) )(1 − e−H )) j =1   d d   = exp (1 − e−H ) Λ(t(j) ) − (1 − e−H (d−j+1) )(Λ(t(j) ) − Λ(t(j−1) )) . j =1

j=1

j =1

Applying Lemma A.1(2) with c = eH , aj = Λ(tj ), and bj = e−H (d−j+1) , j ∈ {1, . . . , d}, leads to

 Ω (t ) = exp (1 − e ∗

−H

)

 = exp (1 − e−H )

d 

Λ(t(j) ) − (e − 1) H

d 

j =1

j =1

d 



 e

−H (d−j+1)

Λ(t(j) )

(1 − e−H (d−j) )Λ(t(j) ) ,

j =1

so that (11) easily follows via Ω from Corollary 2.4.



Proof of Corollary 4.6. The result follows immediately by noting that the generalized inverses of the marginal survival functions $S_j$, $j \in \{1,\dots,d\}$, are given explicitly by $S_j^-(u) = -\log(u)/\lambda_j$, $j \in \{1,\dots,d\}$. □

Lemma A.2. Assume ($M_j$) to hold, and let $C$ be the $d$-dimensional homogeneous Poisson Sibuya copula as given in (13). For simplicity, let $a = 1-e^{-H}$ and $b_j = 1-e^{-H(d-j)}$, $j \in \{1,\dots,d\}$. Furthermore, let $\mathbf{1} = (1,\dots,1)^\top$ ($d$-dimensional) and, for a $d$-dimensional vector $u$, let $u_J$ denote the corresponding subvector of dimension $|J|$ containing all components with index in $J$. Then

(1) $C^J(u_J) = \prod_{j\in J} u_j \prod_{j\in J} \big(u_\cdot^{-\lambda/\lambda_\cdot}\big)_{(j)}^{ab_j}$;

(2) $\hat{C}(\mathbf{1} - u) = \sum_{J\subseteq\{1,\dots,d\}} (-1)^{|J|} \prod_{j=1}^d u_j^{\mathbf{1}_J(j)} \prod_{j=d-|J|+1}^d \big(u_\cdot^{-\lambda/\lambda_\cdot}\big)_{(j)}^{ab_j}$;

(3) $\hat{C}^J(\mathbf{1} - u_J) = \sum_{I\subseteq J} (-1)^{|I|} \prod_{j\in J} u_j^{\mathbf{1}_I(j)} \prod_{k=|J|-|I|+1}^{|J|} \big(u_\cdot^{-\lambda/\lambda_\cdot}\big)_{(j_k)}^{ab_{j_k}}$, where $J = \{j_1,\dots,j_{|J|}\}$ with $j_1 < \cdots < j_{|J|}$.

Proof. (1) Directly follows from (13) by taking the corresponding margin; that is, by letting $u_j = 1$ for all $j \notin J$.

(2) First, consider $\big(u_\cdot^{-\mathbf{1}_J(\cdot)\lambda/\lambda_\cdot}\big)_{(j)}$, $j \in \{1,\dots,d\}$. There are precisely $d-|J|$ of the terms $u_\cdot^{-\mathbf{1}_J(\cdot)\lambda/\lambda_\cdot}$ which equal $1$ (those for which $\mathbf{1}_J(\cdot) = 0$); all others are $u_\cdot^{-\lambda/\lambda_\cdot} \ge 1$. Therefore, if $j \in \{1,\dots,d-|J|\}$, the $j$th smallest of all values $u_\cdot^{-\mathbf{1}_J(\cdot)\lambda/\lambda_\cdot}$ is $1$; otherwise it is $\big(u_\cdot^{-\lambda/\lambda_\cdot}\big)_{(j)}$. This implies that
$$\hat{C}(\mathbf{1} - u) = \sum_{J\subseteq\{1,\dots,d\}} (-1)^{|J|}\, C\big((u_j^{\mathbf{1}_J(j)})_{j\in\{1,\dots,d\}}\big) = \sum_{J\subseteq\{1,\dots,d\}} (-1)^{|J|} \prod_{j=1}^d u_j^{\mathbf{1}_J(j)} \prod_{j=1}^d \big(u_\cdot^{-\mathbf{1}_J(\cdot)\lambda/\lambda_\cdot}\big)_{(j)}^{ab_j} = \sum_{J\subseteq\{1,\dots,d\}} (-1)^{|J|} \prod_{j=1}^d u_j^{\mathbf{1}_J(j)} \prod_{j=d-|J|+1}^d \big(u_\cdot^{-\lambda/\lambda_\cdot}\big)_{(j)}^{ab_j}.$$

(3) We have
$$\hat{C}^J(\mathbf{1} - u_J) = \sum_{I\subseteq\{1,\dots,d\}} (-1)^{|I|}\, C\big((u_j^{\mathbf{1}_I(j)})_{j\in\{1,\dots,d\}}\big)\Big|_{u_j = 0 \text{ for all } j \notin J}.$$
The $j$th argument of $C$ is $u_j$ if $j \in I\cap J$, $0$ if $j \in I\cap J^c$, and $1$ if $j \in I^c$. Hence, the summands in the last sum are $0$ (and thus drop out) if $j \in I\cap J^c$ for at least one $j \in \{1,\dots,d\}$. This implies
$$\hat{C}^J(\mathbf{1} - u_J) = \sum_{I\subseteq J} (-1)^{|I|}\, C\big((u_j^{\mathbf{1}_{I\cap J}(j)})_{j\in\{1,\dots,d\}}\big)\Big|_{u_j = 0 \text{ for all } j \notin J} = \sum_{I\subseteq J} (-1)^{|I|}\, C^J\big((u_j^{\mathbf{1}_{I\cap J}(j)})_{j\in J}\big) = \sum_{I\subseteq J} (-1)^{|I|}\, C^J\big((u_j^{\mathbf{1}_I(j)})_{j\in J}\big).$$
By Part (1),
$$C^J\big((u_j^{\mathbf{1}_I(j)})_{j\in J}\big) = \prod_{j\in J} u_j^{\mathbf{1}_I(j)} \prod_{j\in J} \big(u_\cdot^{-\mathbf{1}_I(\cdot)\lambda/\lambda_\cdot}\big)_{(j)}^{ab_j} = \prod_{j\in J} u_j^{\mathbf{1}_I(j)} \prod_{k=1}^{|J|} \big(u_\cdot^{-\mathbf{1}_I(\cdot)\lambda/\lambda_\cdot}\big)_{(j_k)}^{ab_{j_k}}.$$
Similarly as in Part (2), we obtain that $\big(u_\cdot^{-\mathbf{1}_I(\cdot)\lambda/\lambda_\cdot}\big)_{(j_k)} = 1$ if $k \in \{1,\dots,|J|-|I|\}$ and $\big(u_\cdot^{-\mathbf{1}_I(\cdot)\lambda/\lambda_\cdot}\big)_{(j_k)} = \big(u_\cdot^{-\lambda/\lambda_\cdot}\big)_{(j_k)}$ if $k \in \{|J|-|I|+1,\dots,|J|\}$. Hence,
$$\hat{C}^J(\mathbf{1} - u_J) = \sum_{I\subseteq J} (-1)^{|I|} \prod_{j\in J} u_j^{\mathbf{1}_I(j)} \prod_{k=|J|-|I|+1}^{|J|} \big(u_\cdot^{-\lambda/\lambda_\cdot}\big)_{(j_k)}^{ab_{j_k}}. \quad\square$$
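The counting argument used in Part (2), namely that the $d-|J|$ smallest order statistics equal $1$ while the remaining ones are the sorted nontrivial powers, can be illustrated with the following sketch (not from the paper; the index set, the rates $\lambda_j$, and the values $u_j$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
d, lam = 6, 1.0
lam_j = rng.uniform(0.5, 2.0, size=d)           # illustrative marginal rates
u = rng.uniform(0.05, 0.95, size=d)             # values in (0, 1)
J = {1, 3, 4}                                   # an arbitrary index set (0-based)

v = np.array([u[j] ** (-lam / lam_j[j]) if j in J else 1.0 for j in range(d)])
v_sorted = np.sort(v)
# the d - |J| smallest order statistics are the entries equal to 1 ...
assert np.allclose(v_sorted[: d - len(J)], 1.0)
# ... and the remaining ones are the sorted nontrivial values u_j^{-lam/lam_j}, j in J
assert np.allclose(v_sorted[d - len(J):],
                   np.sort([u[j] ** (-lam / lam_j[j]) for j in J]))
print("order-statistic argument of Lemma A.2(2) illustrated")
```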

Proof of Proposition 4.8. Assume the setup and notation of Lemma A.2 to hold. By (13), we obtain $C(u,\dots,u) = u^{d-\lambda a\sum_{j=1}^d b_j/\lambda_{[j]}}$. Lemma A.2 Part (1) implies that $C^J(u,\dots,u) = u^{|J|-\lambda a\sum_{j\in J} b_j/\lambda_{[j]}}$. From Part (2) of that lemma we obtain
$$\hat{C}(1-u,\dots,1-u) = \sum_{J\subseteq\{1,\dots,d\}} (-1)^{|J|} \prod_{j=1}^d u^{\mathbf{1}_J(j)} \prod_{j=d-|J|+1}^d \big(u^{-\lambda/\lambda_\cdot}\big)_{(j)}^{ab_j} = \sum_{J\subseteq\{1,\dots,d\}} (-1)^{|J|}\, u^{|J|-\lambda a\sum_{j=d-|J|+1}^d b_j/\lambda_{[j]}} = \sum_{k=0}^d \binom{d}{k} (-1)^k\, u^{k-\lambda a\sum_{j=d-k+1}^d b_j/\lambda_{[j]}}.$$
From Part (3) of Lemma A.2 it follows that
$$\hat{C}^J(1-u,\dots,1-u) = \sum_{I\subseteq J} (-1)^{|I|} \prod_{j\in J} u^{\mathbf{1}_I(j)} \prod_{k=|J|-|I|+1}^{|J|} \big(u^{-\lambda/\lambda_\cdot}\big)_{(j_k)}^{ab_{j_k}} = \sum_{I\subseteq J} (-1)^{|I|}\, u^{|I|-\lambda a\sum_{k=|J|-|I|+1}^{|J|} b_{j_k}/\lambda_{[j_k]}} = \sum_{i=0}^{|J|} \binom{|J|}{i} (-1)^i\, u^{i-\lambda a\sum_{k=|J|-i+1}^{|J|} b_{j_k}/\lambda_{[j_k]}}.$$
Based on these results and by applying l'Hôpital's Rule for $\lambda_{U,J}$ and $\varepsilon_U$, one can derive the results about $\lambda_{L,J}$, $\lambda_{U,J}$, $\varepsilon_L$, and $\varepsilon_U$ as stated. □
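The first chain of equalities on the diagonal can be checked by brute force over all subsets. In the sketch below (illustrative parameters, not from the paper), $\lambda_{[j]}$ is interpreted as the $j$th largest of the marginal rates $\lambda_1,\dots,\lambda_d$, which is an assumption about the notation rather than a statement taken from the text:

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(5)
d, lam, H, u = 4, 1.2, 0.9, 0.6
lam_j = rng.uniform(0.5, 2.0, size=d)
a = 1 - np.exp(-H)
b = 1 - np.exp(-H * (d - np.arange(1, d + 1)))     # b_j = 1 - e^{-H(d-j)}
lam_desc = np.sort(lam_j)[::-1]                    # lambda_[1] >= ... >= lambda_[d] (assumed meaning)

pow_sorted = np.sort(u ** (-lam / lam_j))          # (u^{-lam/lam_.})_(1) <= ... <= (u^{-lam/lam_.})_(d)

# subset form of C-hat(1-u,...,1-u) from Lemma A.2(2) on the diagonal
lhs = sum((-1) ** len(J) * u ** len(J)
          * np.prod(pow_sorted[d - len(J):] ** (a * b[d - len(J):]))
          for k in range(d + 1) for J in combinations(range(d), k))
# binomial form with exponent k - lam * a * sum_{j > d-k} b_j / lambda_[j]
rhs = sum(comb(d, k) * (-1) ** k
          * u ** (k - lam * a * np.sum(b[d - k:] / lam_desc[d - k:]))
          for k in range(d + 1))
print(lhs, rhs)   # should agree up to floating-point error
```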


Proof of Proposition 4.9. Assume (V). Then $S_j(t) = \exp(-M_j(t))\,\psi_{Y_t}(1) = \exp(-M_j(t))\,\psi_V(f(t))$, $j \in \{1,\dots,d\}$, as in the claim. By Theorem 2.2,
$$\Omega^*(t) = \frac{\psi_{\sum_{j=1}^d Y_{t_j}}(1)}{\prod_{j=1}^d \psi_{Y_{t_j}}(1)} = \frac{\psi_{V\sum_{j=1}^d f(t_j)}(1)}{\prod_{j=1}^d \psi_{Vf(t_j)}(1)} = \frac{\psi_V\big(\sum_{j=1}^d f(t_j)\big)}{\prod_{j=1}^d \psi_V(f(t_j))},$$
so that (15) follows via $\Omega$ from Corollary 2.4. The second statement directly follows from the first. □
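A quick Monte Carlo sketch of this identity for one concrete frailty, $V \sim \mathrm{Gamma}(\theta,1)$ with $\psi_V(x) = (1+x)^{-\theta}$ and an illustrative $f$, compares the simulated numerator $\psi_{\sum_j Y_{t_j}}(1)$ with the closed form $\psi_V\big(\sum_j f(t_j)\big)$; all names and parameter values here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
theta = 0.8
f = lambda t: np.sqrt(t)                 # illustrative nondecreasing f with f(0) = 0
t = np.array([0.3, 1.2, 2.5])
psi_V = lambda x: (1 + x) ** (-theta)    # Laplace-Stieltjes transform of V ~ Gamma(theta, 1)

V = rng.gamma(theta, 1.0, size=500_000)
num_mc = np.mean(np.exp(-V * f(t).sum()))   # Monte Carlo estimate of psi_{sum_j Y_{t_j}}(1), Y_t = V f(t)
den = np.prod(psi_V(f(t)))                  # prod_j psi_V(f(t_j))

print(num_mc / den, psi_V(f(t).sum()) / den)   # both estimate Omega*(t) and should agree
```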


