Tail asymptotic results for elliptical distributions

Enkelejd Hashorva*

University of Bern, Institute of Mathematical Statistics and Actuarial Sciences, Sidlerstrasse 5, CH-3012 Bern, Switzerland
Allianz Suisse Insurance Company, Laupenstrasse 27, CH-3001 Bern, Switzerland

* Corresponding address: University of Bern, Institute of Mathematical Statistics and Actuarial Sciences, Sidlerstrasse 5, CH-3012 Bern, Switzerland. E-mail addresses: [email protected], [email protected]. doi:10.1016/j.insmatheco.2008.04.005

Article history: Received February 2008; received in revised form April 2008; accepted 22 April 2008.

Abstract: Let (X, Y) be a bivariate elliptical random vector with associated random radius in the Gumbel max-domain of attraction. In this paper we obtain a second order asymptotic expansion of the joint survival probability P{X > x, Y > y} and the conditional probability P{Y > y | X > x}, for x, y large.

MSC: primary 60F05; secondary 60G70

Keywords: Elliptical random vectors; Exact asymptotics; Second order bounds; Gumbel max-domain of attraction; Conditional distribution; Conditional quantile function; Joint survival probability

1. Introduction

Let (S_1, S_2) be a rotationally invariant (spherical) bivariate random vector with associated random radius R := \sqrt{S_1^2 + S_2^2}. If R > 0 almost surely, then we have the stochastic representation (see Cambanis et al. (1981))

(S_1, S_2) \overset{d}{=} (R U_1, R U_2),

where the bivariate random vector (U_1, U_2) is independent of the associated random radius R and uniformly distributed on the unit circle of \mathbb{R}^2 (\overset{d}{=} stands for equality of distribution functions). Linear combinations of spherical random vectors define the larger class of elliptical random vectors; the canonical examples include the Gaussian and the Kotz distributions (see e.g., Fang et al. (1990) or Kotz et al. (2000)).

In this paper we consider a bivariate elliptical random vector defined in terms of (S_1, S_2) and the parameter ρ ∈ (−1, 1) via the stochastic representation

(X, Y) \overset{d}{=} (S_1, \rho S_1 + \sqrt{1 - \rho^2}\, S_2).    (1)

Since for any two constants a, b (see Lemma 12.2.1 of Berman (1992))

a S_1 + b S_2 \overset{d}{=} S_1 \sqrt{a^2 + b^2} \overset{d}{=} S_2 \sqrt{a^2 + b^2},    (2)

we have X \overset{d}{=} Y \overset{d}{=} S_1 \overset{d}{=} S_2. Referring to Cambanis et al. (1981), the random variable S_1 is symmetric about 0, and furthermore

S_1^2 \overset{d}{=} R^2 W,    (3)

with W a Beta random variable with parameters 1/2, 1/2 independent of R. Consequently, if the distribution function F of the random radius R is known, then the distribution function of X is determined via (3).

The basic distributional properties of elliptical random vectors are well known; see e.g., Kotz (1975), Cambanis et al. (1981), Fang et al. (1990), Kano (1994), Szabłowski (1998), Kotz et al. (2000), Hult and Lindskog (2002) and Schmidt and Schmieder (2007), among several others. The tail asymptotic behaviour of each component, say of X, can be determined under some assumptions on the tail asymptotics of the associated random radius R. The main work in this direction is done in Carnal (1970), Gale (1980), Eddy and Gale (1981), and Berman (1982, 1983, 1992), among several others. For instance, Berman (1992) derives the exact asymptotic behaviour of S_1 when R has a distribution function in the Gumbel max-domain of attraction. Motivated by Berman's work, Hashorva (2007) obtains an exact asymptotic expansion of the bivariate survival probability

P\{X > x, Y > ax\},    a ∈ (−∞, 1],

letting x tend to ∞. See also Asimit and Jones (2007) for a partial result.

The main impetus for the present article comes from the recent deep contribution by Abdous et al. (2007), which motivates a refinement of the asymptotic expansion of the joint survival probability obtained in Hashorva (2007). This refinement is achieved by assuming second order asymptotic bounds on the tail behaviour of the distribution function F of the associated random radius R. Our results are of theoretical interest, providing detailed asymptotic expansions for a classical problem of probability theory, namely the tail asymptotics of random vectors. Furthermore, based on our new results, estimation of the joint survival probability, the conditional distribution and the related quantile function is possible, which is of interest in insurance and finance.

The rest of the paper is organised as follows: in Section 2 we present the main results. Illustrating examples follow in Section 3. The proofs of all the results and related asymptotics are relegated to Section 4.
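To make the stochastic representation (1) concrete, the following minimal sketch (not from the paper) simulates a bivariate elliptical vector from an assumed radial law, here a Weibull-type radius chosen purely for illustration, and checks numerically that X and Y share the same marginal distribution, as implied by (2). The radial choice, sample size and thresholds are assumptions of this sketch only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_elliptical(n, rho, sample_radius):
    """Sample (X, Y) via (1): (X, Y) = R*(U1, rho*U1 + sqrt(1-rho^2)*U2)."""
    theta = rng.uniform(-np.pi, np.pi, size=n)
    u1, u2 = np.cos(theta), np.sin(theta)      # (U1, U2) uniform on the unit circle
    r = sample_radius(n)                        # associated random radius R, independent of (U1, U2)
    s1, s2 = r * u1, r * u2                     # spherical vector (S1, S2)
    return s1, rho * s1 + np.sqrt(1.0 - rho**2) * s2

# assumed radial law (illustrative only): P{R > r} = exp(-r^2)
radius = lambda n: rng.weibull(2.0, size=n)

x, y = sample_elliptical(200_000, rho=0.5, sample_radius=radius)
print("P{X>1, Y>0.8} (Monte Carlo):", np.mean((x > 1.0) & (y > 0.8)))
print("X-quantiles:", np.quantile(x, [0.9, 0.99]))
print("Y-quantiles:", np.quantile(y, [0.9, 0.99]))   # should agree with the X-quantiles, cf. (2)
```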

2. Main results

Let (X, Y) be a bivariate elliptical random vector as in (1), where the associated random radius R has distribution function F with upper endpoint x_F ∈ (0, ∞] and F(0) = 0. Hashorva (2007) derives an asymptotic expansion of the tail probability P{X > x, Y > y} for x ↑ x_F assuming that F is in the Gumbel max-domain of attraction, i.e.,

\lim_{u \uparrow x_F} \frac{1 - F(u + s/w(u))}{1 - F(u)} = \exp(-s),    \forall s \in \mathbb{R},    (4)

with w some positive scaling function. Refer to Galambos (1987), Resnick (1987), Reiss (1989), Embrechts et al. (1997), Falk et al. (2004), Kotz and Nadarajah (2005), or De Haan and Ferreira (2006) for more details on the Gumbel max-domain of attraction. It is well known (see e.g., Resnick (1987)) that (4) is equivalent to the fact that the distribution function F admits the representation

1 - F(x) = d(x)[1 - F^*(x)],    x < x_F,    (5)

with d(x) a positive function converging to d > 0 as x ↑ x_F and

F^*(x) = 1 - \exp\Bigl(-\int_{z_0}^{x} w(s)\, ds\Bigr),    x < x_F,    (6)

where z_0 is a finite constant in the left neighbourhood of the upper endpoint x_F. The distribution function F^* is referred to in the following as the von Mises distribution related to F, whereas w is referred to as the von Mises scaling function of F.

Under the Gumbel max-domain of attraction assumption on F (see Hashorva (2007)) we have

P\{X > x, Y > ax\} = (1 + o(1)) \frac{c_1}{\sqrt{x\, w(\alpha_{\rho,x,ax}\, x)}}\, P\{X > \alpha_{\rho,x,ax}\, x\},    x \to \infty,    (7)

with c_1 a known constant, provided that a ∈ (ρ, 1] and x_F = ∞, where α_{ρ,x,y} is defined by

\alpha_{\rho,x,y} := \sqrt{1 + (y/x - \rho)^2/(1 - \rho^2)}.    (8)

For x, y positive such that y/x ≥ a > ρ we have α_{ρ,x,y} > 1. Since the scaling function w satisfies

\lim_{u \uparrow x_F} u\, w(u) = \infty    and    \lim_{u \uparrow x_F} w(u)(x_F - u) = \infty  if  x_F < \infty,    (9)

we conclude that the joint tail asymptotics in (7) is faster than the componentwise tail asymptotics. It turns out that for y < ρx, or for y close enough to ρx, the tail asymptotics of interest is, up to a constant, the same as that of P{X > x}. For the bivariate Gaussian distribution it is well known that this fact is closely related to the so-called Savage condition; see Dai and Mukherjea (2001), Hashorva and Hüsler (2003), or Hashorva (2005a) for details. For elliptical distributions the case where y is close to ρx has been considered independently in Gale (1980), Eddy and Gale (1981) and Berman (1982, 1983, 1992).

In this paper we are interested in a refinement of (7), which will be achieved at the extra cost of a second order assumption on F. Explicitly, as suggested in Abdous et al. (2007), we impose the following assumption:

A1. Suppose that the distribution function F satisfies (5) with von Mises distribution function F^*, and assume further that there exist positive measurable functions A_i, B_i, i = 1, 2, such that for a scaling function w for which (4) holds we have

\Bigl|\frac{1 - F^*(u + x/w(u))}{1 - F^*(u)} - \exp(-x)\Bigr| \le A_1(u) B_1(x)    (10)

and

|d(u + x/w(u)) - d(u)| \le A_2(u) B_2(x)    (11)

for all u large and any x ≥ 0. Furthermore, we assume that lim_{u→∞} A_1(u) = lim_{u→∞} A_2(u) = 0 and that B_1, B_2 are locally bounded on any compact interval of [0, ∞).

Set in the following (whenever Assumption A1 is assumed)

A(x) := A_1(x) + A_2(x),    B(x) := B_1(x) + \exp(-x) B_2(x),    \forall x > 0.    (12)

We note in passing that A_2(x) = B_2(x) = 0, x > 0, is the original condition in the aforementioned paper, where it is shown (see Lemma 7 therein) that the class of distribution functions satisfying Assumption A1 is quite large.

We consider in the following only distribution functions F with an infinite upper endpoint. Further we assume that ρ ∈ [0, 1).

We state now the main result of this paper:

Theorem 1. Let (S_1, S_2) be a bivariate spherical random vector with associated random radius R ∼ F, where the distribution function F has an infinite upper endpoint and F(0) = 0. Let (X, Y) be a bivariate elliptical random vector with stochastic representation (X, Y) \overset{d}{=} (S_1, \rho S_1 + \sqrt{1 - \rho^2}\, S_2), ρ ∈ [0, 1). Assume that (4) holds with the scaling function w and that Assumption A1 is valid with A_i, B_i, i = 1, 2, positive measurable functions (with A, B as in (12)).

(a) If x, y are positive constants such that y ∈ (ρx, x],

\alpha_{\rho,x,y} \ge c > 1,    (13)

and \int_0^{\infty} B(s)\, ds < \infty, then for all x large

P\{X > x, Y > y\} = \frac{\alpha_{\rho,x,y} K_{\rho,x,y}}{2\pi\, x\, w(\alpha_{\rho,x,y} x)}\, [1 - F(\alpha_{\rho,x,y} x)] \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr],    (14)


with α_{ρ,x,y} as defined in (8) and K_{ρ,x,y} given by

K_{\rho,x,y} := \frac{x^2 (1 - \rho^2)^{3/2}}{(x - \rho y)(y - \rho x)} \in (0, \infty).    (15)

(b) Set h(x) := x w(x), x > 0, and let z_x, x ∈ \mathbb{R}, be given constants such that |z_x| < K < ∞ for all x large. If further \int_0^{\infty} s^{-1/2} B(s)\, ds < \infty, then for all x large and y := x[\rho + z_x \sqrt{1 - \rho^2}/\sqrt{h(x)} + \rho/h(x)] we have

P\{X > x, Y > y\} = \frac{1}{\sqrt{2\pi}} [1 - \Phi(z_x)] \frac{1 - F(x)}{\sqrt{h(x)}} \Bigl[1 + O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr)\Bigr],    (16)

where Φ denotes the standard Gaussian distribution on \mathbb{R}.

(c) If x, y are positive constants with y < ax, a ∈ (0, ρ), ρ > 0, and \int_0^{\infty} s^{-1/2} B(s)\, ds < \infty, then

P\{X > x, Y > y\} = \frac{1}{\sqrt{2\pi}} \frac{1 - F(x)}{\sqrt{h(x)}} \Bigl[1 + O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr) + \frac{1 - F(\alpha_{\rho,x,y} x)}{1 - F(x)} \frac{o(1)}{\sqrt{h(x)}}\Bigr].    (17)

Remark 1. (a) By the assumption on F (recall (9)) we have

\frac{1}{x\, w(\alpha_{\rho,x,y} x)} = o(1),    x \to \infty.

If we assume that lim_{u→∞} A(u) = 0 holds, then we have

O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr) = o(1),    x \to \infty.

Consequently, by statement (a) in Theorem 1

P\{X > x, Y > y\} = (1 + o(1)) \frac{\alpha_{\rho,x,y} K_{\rho,x,y}}{2\pi\, x\, w(\alpha_{\rho,x,y} x)} [1 - F(\alpha_{\rho,x,y} x)],    x \to \infty,

and if y = ax(1 + o(1)), a ∈ (ρ, 1], then for all large x

P\{X > x, Y > y\} = (1 + o(1)) \frac{\alpha_{\rho,x,y} K_{\rho,a}}{2\pi\, x\, w(\alpha_{\rho,x,y} x)} [1 - F(\alpha_{\rho,x,y} x)]

holds with

K_{\rho,a} := \frac{(1 - \rho^2)^{3/2}}{(1 - a\rho)(a - \rho)} \in (0, \infty).    (18)

(b) Sufficient conditions for (10) to hold are derived in Lemma 7 of Abdous et al. (2007). An instance is when lim_{x→∞} w(tx)/w(x) = t^{θ−1}, θ > 0, ∀t > 0, i.e., the scaling function w defining F^* is regularly varying at infinity with index θ − 1. Under the assumptions of Lemma 7 in the aforementioned paper we may choose

A_1(u) = O(|(1/w(u))'|),    B_1(x) = (1 + x)^{-\kappa},    u > 0,\ x > 0,\ \kappa > 1.

(c) We have α_{ρ,x,y} = 1 iff y = ρx, and for any y ∈ (0, x], x > 0,

1 \le \alpha_{\rho,x,y}^2 \le \frac{2}{1 + \rho},    \rho \in (-1, 1).

(d) Since (4) implies that 1 − F is rapidly varying (see e.g., Resnick (1987)) and α_{ρ,x,y} ≥ c > 1, we have

\lim_{x \to \infty} \frac{1 - F(\alpha_{\rho,x,y} x)}{1 - F(x)} (h(x))^{3/2} = 0,

consequently in (17) we have

\frac{1 - F(\alpha_{\rho,x,y} x)}{1 - F(x)} \frac{o(1)}{\sqrt{h(x)}} = O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr),

hence for all large x

P\{X > x, Y > y\} = \frac{1}{\sqrt{2\pi}} \frac{1 - F(x)}{\sqrt{h(x)}} \Bigl[1 + O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr)\Bigr].

We present below an alternative expansion of the tail probability of interest assuming y ∈ (ρx, x].

Theorem 2. Let (X, Y), ρ, F be as in Theorem 1. Suppose that (4) holds with the scaling function w and further that Assumption A1 is satisfied with A_i, B_i positive measurable functions. Let x, y be two positive constants such that (13) holds. If \int_0^{\infty} \max(1, 1/\sqrt{s})\, B(s)\, ds < \infty holds (A, B are defined in (12)), then for all x large we have

P\{X > x, Y > y\} = \frac{\alpha_{\rho,x,y}^{3/2} K_{\rho,x,y}}{\sqrt{2\pi\, x\, w(\alpha_{\rho,x,y} x)}}\, P\{X > \alpha_{\rho,x,y} x\} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr],    (19)

with α_{ρ,x,y}, K_{ρ,x,y} as defined in (8) and (15), respectively.

Next, we consider the implications of our results for the tail asymptotics of the conditional survival probabilities.

Theorem 3. Under the assumptions and notation of Theorem 2 we have for all x, y large

P\{Y > y \mid X > x\} = \frac{\alpha_{\rho,x,y}^{3/2} K_{\rho,x,y}}{\sqrt{2\pi}}\, g(\alpha_{\rho,x,y}, x),    (20)

where

g(\alpha_{\rho,x,y}, x) := \sqrt{\frac{w(x)}{\alpha_{\rho,x,y}\, x}}\, \frac{1 - F(\alpha_{\rho,x,y} x)}{w(\alpha_{\rho,x,y} x)\,[1 - F(x)]} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr].    (21)

Furthermore, we have

\lim_{x \to \infty} P\{Y > y \mid X > x\} = 0,    (22)

and for any a ∈ (ρ, 1], z ≥ 0,

P\{Y > ax + z/w(\alpha_{a,\rho}\, x) \mid X > x\} = (1 + o(1)) \frac{\alpha_{a,\rho}^{3/2} K_{\rho,a}}{\sqrt{2\pi}} \exp(-\lambda_{a,\rho} z)\, g(\alpha_{a,\rho}, x)    (23)

holds locally uniformly with respect to z for x large, with K_{ρ,a} as defined in (18) and

\alpha_{a,\rho} := \sqrt{(1 - 2a\rho + a^2)/(1 - \rho^2)} > 1,    \lambda_{a,\rho} := \frac{a - \rho}{\sqrt{(1 - \rho^2)(1 - 2a\rho + a^2)}} > 0.    (24)

Convergence to a positive constant in (22) is achieved when y depends on x and is close to ρx for x large. See Gale (1980), Berman (1982, 1983, 1992), Abdous et al. (2007) and Hashorva (2006, 2007) for elliptical cases, and Hashorva et al. (2007) for generalised symmetrised Dirichlet distributions.
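The constants entering Theorems 1 to 3 are fully explicit. The sketch below (not from the paper) computes α_{ρ,x,y} and K_{ρ,x,y} from (8) and (15), and α_{a,ρ}, λ_{a,ρ}, K_{ρ,a} from (18) and (24), and checks numerically that along a ray y = ax the two parametrisations coincide; the particular values of ρ, a, x are arbitrary test inputs.

```python
import math

def alpha(rho, x, y):                       # (8)
    return math.sqrt(1.0 + ((y / x - rho) ** 2) / (1.0 - rho**2))

def K(rho, x, y):                           # (15)
    return x**2 * (1.0 - rho**2) ** 1.5 / ((x - rho * y) * (y - rho * x))

def K_ray(rho, a):                          # (18)
    return (1.0 - rho**2) ** 1.5 / ((1.0 - a * rho) * (a - rho))

def alpha_ray(a, rho):                      # (24)
    return math.sqrt((1.0 - 2.0 * a * rho + a**2) / (1.0 - rho**2))

def lam_ray(a, rho):                        # (24)
    return (a - rho) / math.sqrt((1.0 - rho**2) * (1.0 - 2.0 * a * rho + a**2))

rho, a, x = 0.5, 0.8, 10.0
y = a * x
print("alpha_{rho,x,ax} =", alpha(rho, x, y), "  alpha_{a,rho} =", alpha_ray(a, rho))  # equal
print("K_{rho,x,ax}     =", K(rho, x, y),     "  K_{rho,a}     =", K_ray(rho, a))      # equal
print("lambda_{a,rho}   =", lam_ray(a, rho))
```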


Remark 2. (a) A slightly more general setup is obtained when Assumption A1 is reformulated considering positive measurable functions A_i, B_i, i ≥ 1, such that for all x > 0 and u large

\Bigl|\frac{1 - F^*(u + x/w(u))}{1 - F^*(u)} - \exp(-x)\Bigr| \le \sum_{i=1}^{\infty} A_i(u) B_i(x) =: \Psi(u, x)

is valid with lim_{u→∞} sup_{i≥1} A_i(u) = 0 and Ψ(u, x) finite for all x > 0 and u large.

(b) In the asymptotic results above the scaling function w appears prominently. One choice for the scaling function is the von Mises scaling function, i.e., the function defining the von Mises distribution function F^* in (6). We can choose, however, another scaling function \widetilde{w} defined asymptotically by

\widetilde{w}(u) := \frac{(1 + o(1))[1 - F(u)]}{\int_u^{x_F} [1 - F(s)]\, ds},    u \uparrow x_F.    (25)

Note that the bounding functions A_i, B_i, i = 1, 2, in Assumption A1 depend on which concrete scaling function we choose. In view of Lemma 16 in Abdous et al. (2007) (see its assumptions and Lemma 7 therein), if w_F is the von Mises scaling function defining F^* in (6) and \widetilde{w} is another scaling function defined by (25), then (10) holds with \widetilde{w} instead of w_F and \widetilde{A}_1 instead of A_1, where \widetilde{A}_1 is defined by

\widetilde{A}_1(u) = O\bigl(|(1/w_F(u))'| + |w_F(u) - \widetilde{w}(u)|/\widetilde{w}(u)\bigr),    \forall u > 0.    (26)

If F is a von Mises distribution function, then we can make use of Lemma 7 in Abdous et al. (2007) (provided the assumptions of that lemma hold). We discuss in the next lemma the case where F is a mixture distribution.

Lemma 4. Let F_i, i ≥ 1, be von Mises distribution functions with the same scaling function w and upper endpoint infinity. Suppose that Assumption A1 is satisfied for all F_i, i ≥ 1, with corresponding functions w, A_{i1}, B_{i1}, i ≥ 1, and A_{i2}, B_{i2}, i ≥ 1, identical to 0. If F is another distribution function defined by

F(x) = \sum_{i=1}^{\infty} a_i F_i(x),    with a_i > 0, i ≥ 1, and \sum_{i=1}^{\infty} a_i = 1,

for all x large, then F is in the Gumbel max-domain of attraction. If F is the distribution function of the random radius R in Theorems 2 and 3, then both these theorems hold with

A(u) = \sum_{i=1}^{\infty} A_{i1}(u),    u > 0,

provided that \sum_{i=1}^{\infty} a_i \int_0^{\infty} B_{i1}(s)\, ds < \infty and A(u) is bounded for all large u with lim_{u→∞} A(u) = 0.

3. Examples

Next we present three illustrating examples.

Example 1 (Kotz Type I). Let (X, Y) = R(U_1, \rho U_1 + \sqrt{1 - \rho^2}\, U_2), ρ ∈ [0, 1), with R a positive random radius independent of the bivariate random vector (U_1, U_2), which is uniformly distributed on the unit circle of \mathbb{R}^2. We call (X, Y) a Kotz Type I elliptical random vector if

1 - F(x) = C x^N \exp(-r x^\delta),    r > 0,\ C > 0,\ \delta > 0,\ N \in \mathbb{R},

for any x > 0. Set w(u) := r\delta u^{\delta - 1}, u > 0. For any x ∈ \mathbb{R} we obtain

\lim_{u \to \infty} \frac{P\{R > u + x/w(u)\}}{P\{R > u\}} = \exp(-x).

Consequently, F is in the Gumbel max-domain of attraction with the scaling function w. Further, we have

P\{X > u\} = (1 + o(1)) \frac{C}{\sqrt{2 r \delta \pi}}\, u^{N - \delta/2} \exp(-r u^\delta),    u \to \infty.

The von Mises scaling function w_F is given by

w_F(u) = r\delta u^{\delta - 1} - N u^{\eta},    \eta := -1.

Hence (26) implies that the approximation under consideration holds choosing A(u) := A_1(u), u > 0, where A_1(u) := O(u^{-\delta}), u > 0. Further we can take B_1(x) = (1 + x)^{-\kappa}, κ > 1, and B_2(x) = 0, x > 0. Hence the second order assumption in our results above is satisfied. We may thus write for x large and y ∈ (ρx, x] such that α_{ρ,x,y} > c > 1

P\{X > x, Y > y\} = (1 + o(1)) \frac{C \alpha_{\rho,x,y}^{2 - \delta + N} K_{\rho,x,y}}{2 r \delta \pi}\, x^{N - \delta} \exp(-r (\alpha_{\rho,x,y} x)^\delta)\, [1 + O(x^{-\delta})]
= (1 + o(1)) \frac{\alpha_{\rho,x,y}^{2 - \delta/2} K_{\rho,x,y}}{\sqrt{2 r \delta \pi}}\, x^{-\delta/2}\, P\{X > \alpha_{\rho,x,y} x\}\, [1 + O(x^{-\delta})]
= (1 + o(1)) \frac{\alpha_{\rho,x,y}^{2 - \delta/2} K_{\rho,x,y}}{\sqrt{2 r \delta \pi}}\, P\{X^* > \alpha_{\rho,x,y} x\}\, [1 + O(x^{-\delta})],

with (X^*, Y^*) another Kotz Type I random vector with coefficients C, N^* = N − δ/2, r, δ.

Example 2 (Tail Equivalent Distributions). Let G be a von Mises distribution function with scaling function w such that (10) holds with functions A_1, B_1, and let F_{γ,τ}, γ, τ > 0, be another distribution function with infinite upper endpoint. Assume further that

1 - F_{\gamma,\tau}(x) = (1 + a x^{-\gamma} + O(x^{-\tau - \gamma}))[1 - G(x)],    \forall x > 0.    (27)

Clearly, F_{γ,τ} and G are tail equivalent since

\lim_{x \to \infty} \frac{1 - F_{\gamma,\tau}(x)}{1 - G(x)} = 1.

Suppose further that w(x) := c\delta x^{\delta - 1}, c > 0, δ > 0, and set d(x) := 1 + a x^{-\gamma} + O(x^{-\tau - \gamma}), x > 0. For all large u and x > 0 we have

|d(u + x/w(u)) - d(u)| = \Bigl|a u^{-\gamma}\bigl[(1 + x/(u w(u)))^{-\gamma} - 1\bigr] - u^{-\tau - \gamma}\bigl[1 + O\bigl((1 + x/(u w(u)))^{-\tau - \gamma}\bigr)\bigr]\Bigr| \le A_2(u) B_2(x),

where A_2(u) := O(u^{-[\gamma + \min(\tau, \delta)]}), u > 0, and B_2(x) is such that \int_0^{\infty} B_2(x) \exp(-x)\, dx < \infty. Hence our asymptotic results in the above theorems hold for such F_{γ,τ} with the function A defined by A(u) := A_1(u) + A_2(u), u > 0.

Example 3 (Regularly Varying w). Consider F a von Mises distribution function in the Gumbel max-domain of attraction with the scaling function w(x) = F'(x)/[1 − F(x)], x > 0, defined by

w(x) := \frac{c\delta x^{\delta - 1}}{1 + t_1(x)} = c\delta x^{\delta - 1}(1 + o(1)),    c > 0,\ \delta > 0,\ \forall x > 0,

which implies (see Abdous et al. (2007)) that for all x large

P\{R > x\} = \exp(-c x^\delta (1 + t_2(x)))

holds, where t_i(x), i = 1, 2, are two regularly varying functions with index ηδ, η < 0. Now choosing \widetilde{w}(x) := c\delta x^{\delta - 1} instead of w(x), we conclude that the second order correction function A_1 satisfies

A_1(u) := O(u^{-\delta} + u^{\eta\delta} L_1(u)),    u > 0,

with L_1(u) a positive slowly varying function, i.e., lim_{u→∞} L_1(tu)/L_1(u) = 1, t > 0. It can be easily checked that the theorems above hold for this case with A(u) := A_1(u), u > 0.
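As a numerical illustration of Example 1 (and of the first order part of (14)), the sketch below takes the concrete Kotz-type tail 1 − F(s) = exp(−s²), i.e. C = 1, N = 0, r = 1, δ = 2 and w(u) = 2u, evaluates the approximation α_{ρ,x,y} K_{ρ,x,y} [1 − F(α_{ρ,x,y}x)]/(2π x w(α_{ρ,x,y}x)), and compares it with the joint survival probability computed by one-dimensional numerical integration of the representation of Section 4 (Lemma 5, rewritten via the substitution s = 1/cos θ used in its proof). The parameter choices are assumptions made only for this check; the ratio should move towards 1 as x grows.

```python
import numpy as np
from scipy.integrate import quad

Fbar = lambda s: np.exp(-s**2)        # assumed Kotz-type radial tail: C=1, N=0, r=1, delta=2
w = lambda u: 2.0 * u                 # corresponding scaling function r*delta*u^(delta-1)

def I(a, x):
    # I(a, x) = int_a^inf (1-F(xs)) / (s*sqrt(s^2-1)) ds, computed via s = 1/cos(theta)
    return quad(lambda t: Fbar(x / np.cos(t)), np.arccos(1.0 / a), np.pi / 2.0,
                epsabs=1e-15, epsrel=1e-10)[0]

def joint_survival(rho, x, y):
    # Lemma 5(a): valid for y in (rho*x, x]
    al = np.sqrt(1.0 + (y / x - rho) ** 2 / (1.0 - rho**2))
    be = al * x / y
    return (I(al, x) + I(be, y)) / (2.0 * np.pi)

def first_order(rho, x, y):
    # leading term of (14) for this radial tail
    al = np.sqrt(1.0 + (y / x - rho) ** 2 / (1.0 - rho**2))
    K = x**2 * (1.0 - rho**2) ** 1.5 / ((x - rho * y) * (y - rho * x))
    return al * K * Fbar(al * x) / (2.0 * np.pi * x * w(al * x))

rho, a = 0.5, 0.8
for x in [1.5, 2.0, 3.0]:
    exact, approx = joint_survival(rho, x, a * x), first_order(rho, x, a * x)
    print(f"x={x}: exact={exact:.3e}  first-order={approx:.3e}  ratio={approx/exact:.3f}")
```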



4. Related asymptotics and proofs

In the following lemma we derive a formula for the tail of a bivariate elliptical random vector. Define next for a ≥ 1 and x, y positive constants

I(a, x) := \int_a^{\infty} \frac{1 - F(xs)}{s\sqrt{s^2 - 1}}\, ds    (28)

and

\alpha_{\rho,x,y} := \sqrt{1 + ((y/x) - \rho)^2/(1 - \rho^2)} \ge 1,    \beta_{\rho,x,y} := \alpha_{\rho,x,y}\, x/y,    x, y \in \mathbb{R},\ \rho \in (-1, 1).    (29)

Lemma 5. Let (S_1, S_2) be a bivariate spherical random vector with associated random radius R which has distribution function F. If ρ ∈ [0, 1) and F(0) = 0, then we have:

(a) If x > 0 and y ∈ (ρx, x]

P\{S_1 > x, \rho S_1 + \sqrt{1 - \rho^2}\, S_2 > y\} = \frac{1}{2\pi} [I(\alpha_{\rho,x,y}, x) + I(\beta_{\rho,x,y}, y)].    (30)

(b) If y/x < ρ and x > 0, y > 0

P\{S_1 > x, \rho S_1 + \sqrt{1 - \rho^2}\, S_2 > y\} = \frac{1}{2\pi} [2 I(1, x) - I(\alpha_{\rho,x,y}, x) + I(\beta_{\rho,x,y}, y)].    (31)

Proof. We have the stochastic representation (see Cambanis et al. (1981) and Berman (1992))

(S_1, \rho S_1 + \sqrt{1 - \rho^2}\, S_2) \overset{d}{=} (R\cos(\Theta), R\cos(\Theta - \psi)),

with Θ uniformly distributed in (−π, π) independent of R and ψ := arccos(ρ). For x > 0, y ≥ 0 two constants we may thus write (see Lemma 3.3 in Hashorva (2005b))

2\pi P\{R\cos(\Theta) > x, R\cos(\Theta - \psi) > y\}
= \int_{\arctan((y/x - \rho)/\sqrt{1 - \rho^2})}^{\pi/2} P\{R > x/\cos(\theta)\}\, d\theta + \int_{\arccos(\rho) - \pi/2}^{\arctan((y/x - \rho)/\sqrt{1 - \rho^2})} P\{R > y/\cos(\theta - \psi)\}\, d\theta
= \int_{\arctan((y/x - \rho)/\sqrt{1 - \rho^2})}^{\pi/2} P\{R > x/\cos(\theta)\}\, d\theta + \int_{-\arctan((y/x - \rho)/\sqrt{1 - \rho^2})}^{\pi/2 - \psi} P\{R > y/\cos(\theta + \psi)\}\, d\theta.

As in Abdous et al. (2007) we obtain for y/x ≥ ρ

2\pi P\{R\cos(\Theta) > x, R\cos(\Theta - \psi) > y\} = I(\alpha_{\rho,x,y}, x) + I(\beta_{\rho,x,y}, y),

and if y/x < ρ with x, y positive

2\pi P\{R\cos(\Theta) > x, R\cos(\Theta - \psi) > y\} = 2 I(1, x) - I(\alpha_{\rho,x,y}, x) + I(\beta_{\rho,x,y}, y),

with α_{ρ,x,y}, β_{ρ,x,y} as defined in (29), hence the proof is complete. □
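The exactness of (30) (not only its asymptotic content) can be checked directly. The sketch below simulates (S_1, S_2) = R(cos Θ, sin Θ) for an assumed radial law, estimates P{S_1 > x, ρS_1 + √(1−ρ²)S_2 > y} by Monte Carlo at moderate thresholds, and compares it with (I(α, x) + I(β, y))/(2π) evaluated by numerical integration via the substitution s = 1/cos θ used in the proof above. The radial law, thresholds and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
Fbar = lambda s: np.exp(-s**2)                 # assumed radial tail, P{R > s} = exp(-s^2)

def I(a, x):                                   # I(a, x) of (28) via s = 1/cos(theta)
    return quad(lambda t: Fbar(x / np.cos(t)), np.arccos(1.0 / a), np.pi / 2.0)[0]

rho, x, y, n = 0.5, 1.0, 0.7, 1_000_000        # note y in (rho*x, x], so (30) applies
al = np.sqrt(1.0 + (y / x - rho) ** 2 / (1.0 - rho**2))
be = al * x / y
lemma5 = (I(al, x) + I(be, y)) / (2.0 * np.pi)  # right-hand side of (30)

r = rng.weibull(2.0, size=n)                    # R with P{R > s} = exp(-s^2)
theta = rng.uniform(-np.pi, np.pi, size=n)
s1, s2 = r * np.cos(theta), r * np.sin(theta)
mc = np.mean((s1 > x) & (rho * s1 + np.sqrt(1 - rho**2) * s2 > y))

print(f"Lemma 5 integral: {lemma5:.5f}   Monte Carlo: {mc:.5f}")
```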

Note in passing that

(R\cos(\Theta), R\cos(\Theta - \arccos(\rho))) \overset{d}{=} (R\cos(\Theta), R\sin(\Theta + \arcsin(\rho))),

which leads to the alternative formula derived in Abdous et al. (2007) and Klüppelberg et al. (2007) for the tails of elliptical distributions (see also Lemma 3.3 in Hashorva (2005b)). In the next lemma we consider a real function a(x) > 1, ∀x ∈ \mathbb{R}. For notational simplicity we write a instead of a(x).

Lemma 6. Let F satisfy (4) with the scaling function w and x_F = ∞, F(0) = 0. Assume further that Assumption A1 holds.

(a) If \int_0^{\infty} B(s)\, ds < \infty, then for any function a := a(x) > 1, x ∈ \mathbb{R}, and x large we have

\int_a^{\infty} \frac{1 - F(xs)}{s\sqrt{s^2 - 1}}\, ds = \frac{1}{a\sqrt{a^2 - 1}}\, \frac{1 - F(ax)}{x\, w(ax)} \Bigl[1 + O\Bigl(A(ax) + \frac{1}{x\, w(ax)}\Bigr)\Bigr].    (32)

(b) If z_x, x ∈ \mathbb{R}, is such that 0 ≤ z_x < K < ∞ for all x large and further \int_{z_x}^{\infty} B(s)/\sqrt{s}\, ds < \infty, then for all large x we have

\int_{1 + z_x/(x w(x))}^{\infty} \frac{1 - F(xs)}{s\sqrt{s^2 - 1}}\, ds = \frac{1 - F(x)}{\sqrt{x\, w(x)}}\, \sqrt{2\pi}\,[1 - \Phi(\sqrt{2 z_x})] \Bigl[1 + O\Bigl(A(x) + \frac{1}{x\, w(x)}\Bigr)\Bigr].    (33)

Proof. (a) Let x be a given positive constant. Set a := a(x), v_a(s) := s\, w(as), s ≥ 0, and define I(a, x), α_{ρ,x,y}, β_{ρ,x,y} as in (28) and (29), respectively. Transforming the variables we have

I(a, x) = \frac{1 - F(ax)}{v_a(x)} \int_0^{\infty} \frac{1 - F(ax + s/w(ax))}{1 - F(ax)}\, \frac{1}{(a + s/v_a(x))\sqrt{(a + s/v_a(x))^2 - 1}}\, ds.

Further, (10) implies for any s ≥ 0 and all x large

\Bigl|\frac{1 - F(ax + s/w(ax))}{1 - F(ax)} - \exp(-s)\Bigr| \le A(ax) B(s).

Hence, utilising further Assumption A1, we may write for all x large

\Bigl| \frac{v_a(x)}{1 - F(ax)} \int_a^{\infty} \frac{1 - F(xs)}{s\sqrt{s^2 - 1}}\, ds - \int_0^{\infty} \frac{\exp(-s)}{a\sqrt{a^2 - 1}}\, ds \Bigr|
\le \int_0^{\infty} \Bigl|\frac{1 - F(ax + s/w(ax))}{1 - F(ax)} - \exp(-s)\Bigr| \frac{1}{(a + s/v_a(x))\sqrt{(a + s/v_a(x))^2 - 1}}\, ds
  + \int_0^{\infty} \exp(-s) \Bigl|\frac{1}{(a + s/v_a(x))\sqrt{(a + s/v_a(x))^2 - 1}} - \frac{1}{a\sqrt{a^2 - 1}}\Bigr|\, ds
\le \frac{1}{a\sqrt{a^2 - 1}} \Bigl[A(ax) \int_0^{\infty} B(s)\, ds + O\Bigl(\frac{1}{v_a(x)}\Bigr)\Bigr]
= O\Bigl(A(ax) + \frac{1}{v_a(x)}\Bigr),

thus the first claim follows.

(b) Next set a(x) := 1 + z_x/(x w(x)), h(x) := x w(x), x > 0. Transforming the variables we obtain for all x positive

I(1 + z_x/h(x), x) = \frac{1 - F(x)}{\sqrt{h(x)}} \int_{z_x}^{\infty} \frac{1 - F(x + s/w(x))}{1 - F(x)}\, \frac{1}{\sqrt{h(x)}\,(1 + s/h(x))\sqrt{(1 + s/h(x))^2 - 1}}\, ds.

Since \int_{z_x}^{\infty} s^{-1/2} B(s)\, ds < \infty for all x large we have (recall Assumption A1)

\Bigl| \int_{z_x}^{\infty} \frac{1 - F(x + s/w(x))}{1 - F(x)}\, \frac{1}{\sqrt{h(x)}\,(1 + s/h(x))\sqrt{(1 + s/h(x))^2 - 1}}\, ds - \int_{z_x}^{\infty} \exp(-s) \frac{1}{\sqrt{2s}}\, ds \Bigr|
\le \int_{z_x}^{\infty} \Bigl|\frac{1 - F(x + s/w(x))}{1 - F(x)} - \exp(-s)\Bigr| \frac{1}{\sqrt{h(x)}\,(1 + s/h(x))\sqrt{(1 + s/h(x))^2 - 1}}\, ds
  + \int_{z_x}^{\infty} \exp(-s) \Bigl|\frac{1}{\sqrt{h(x)}\,(1 + s/h(x))\sqrt{(1 + s/h(x))^2 - 1}} - \frac{1}{\sqrt{2s}}\Bigr|\, ds
\le A(x) \int_0^{\infty} B(s) \frac{1}{\sqrt{2s}}\, ds + O\Bigl(\frac{1}{h(x)} \int_{z_x}^{\infty} \exp(-s)\sqrt{s}\, ds\Bigr)
= O\Bigl(A(x) + \frac{1}{h(x)}\Bigr).

Consequently, since z_x is bounded for all x large,

I(1 + z_x/h(x), x) = \frac{1 - F(x)}{\sqrt{h(x)}} \int_{z_x}^{\infty} \exp(-s) \frac{1}{\sqrt{2s}}\, ds \Bigl[1 + O\Bigl(A(x) + \frac{1}{h(x)}\Bigr)\Bigr]
= \frac{1 - F(x)}{\sqrt{h(x)}}\, \sqrt{2\pi}\,[1 - \Phi(\sqrt{2 z_x})] \Bigl[1 + O\Bigl(A(x) + \frac{1}{h(x)}\Bigr)\Bigr]

is valid, with Φ the standard Gaussian distribution function on \mathbb{R}. Thus the proof is complete. □

Proof of Theorem 1. Define I(a, x) and α_{ρ,x,y}, β_{ρ,x,y} as in (28) and (29), respectively. Assumption A1 implies that for any x > 0, ε > 0 and u large we have

\Bigl|\frac{1 - F(u + x/w(u))}{1 - F(u)} - \exp(-x)\Bigr|
\le \frac{d(u + x/w(u))}{d(u)} \Bigl|\frac{1 - F^*(u + x/w(u))}{1 - F^*(u)} - \exp(-x)\Bigr| + \exp(-x) \Bigl|\frac{d(u + x/w(u))}{d(u)} - 1\Bigr|
\le (1 + \varepsilon) A_1(u) B_1(x) + \exp(-x) A_2(u) B_2(x),

with F^* the von Mises distribution function given in (6). We assume for simplicity in the following that the function d(·) is a constant for all x large, implying A_2(u) = 0 for all u large.

(a) Let x, y be two positive constants such that y > ρx and α_{ρ,x,y} ≥ c > 1. In order to complete the proof we need a formula for the survival probability P{X > x, Y > y}. In view of (30) we have for y > ρx and x, y positive

2\pi P\{X > x, Y > y\} = I(\alpha_{\rho,x,y}, x) + I(\beta_{\rho,x,y}, y).

Since further α_{ρ,x,y} ≥ c > 1, applying Lemma 6 we obtain

2\pi P\{X > x, Y > y\}
= \frac{1}{\alpha_{\rho,x,y}\sqrt{\alpha_{\rho,x,y}^2 - 1}}\, \frac{1 - F(\alpha_{\rho,x,y} x)}{x\, w(\alpha_{\rho,x,y} x)} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr]
  + \frac{1}{\alpha_{\rho,x,y}(x/y)\sqrt{\alpha_{\rho,x,y}^2 (x/y)^2 - 1}}\, \frac{1 - F(\alpha_{\rho,x,y} x)}{y\, w(\alpha_{\rho,x,y} x)} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr]
= \frac{1 - F(\alpha_{\rho,x,y} x)}{x\, w(\alpha_{\rho,x,y} x)} \Bigl[\frac{1}{\alpha_{\rho,x,y}\sqrt{\alpha_{\rho,x,y}^2 - 1}} + \frac{1}{\alpha_{\rho,x,y}\sqrt{\alpha_{\rho,x,y}^2 (x/y)^2 - 1}}\Bigr] \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr]
= \frac{1 - F(\alpha_{\rho,x,y} x)}{x\, w(\alpha_{\rho,x,y} x)}\, \frac{x^2 (1 - \rho^2)^{3/2} \alpha_{\rho,x,y}}{(y - \rho x)(x - \rho y)} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr].

(b) Let z_x, x ∈ \mathbb{R}, be constants bounded for all x. We may write for all x, y

P\{X > x, Y > y\} = P\{X > x\} \frac{P\{X > x, Y > y\}}{P\{X > x\}} =: P\{X > x\}\, \chi(x, y).

For any x positive

P\{X > x\} = \frac{1}{\pi} I(1, x),

hence Lemma 6 implies for all x large enough

P\{X > x\} = \frac{1}{\sqrt{2\pi}} \frac{1 - F(x)}{\sqrt{h(x)}} \Bigl[1 + O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr)\Bigr],

with h(x) := x w(x), x > 0. In view of Theorem 3 in Abdous et al. (2007) we have for all large x

\chi(x, y) = [1 - \Phi(z_x)] \Bigl[1 + O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr)\Bigr].

Hence for all large x

P\{X > x, Y > y\} = \frac{1}{\sqrt{2\pi}} [1 - \Phi(z_x)] \frac{1 - F(x)}{\sqrt{h(x)}} \Bigl[1 + O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr)\Bigr]^2,

thus the result follows.


(c) Now we consider the last case. By (31), for all x, y large with y ≤ ax < ρx (implying y ∈ (0, x]),

2\pi P\{X > x, Y > y\} = 2 I(1, x) - I(\alpha_{\rho,x,y}, x) + I(\beta_{\rho,x,y}, y),

where \alpha_{\rho,x,y}^2 \ge 1 + (a - \rho)^2/(1 - \rho^2) > 1 and β_{ρ,x,y} := α_{ρ,x,y} x/y > 1. By the above results we have for all y < ax and x, y large enough

P\{X > x, Y > y\} = \frac{1}{\sqrt{2\pi}} \frac{1 - F(x)}{\sqrt{h(x)}} \Bigl[1 + O\Bigl(A(x) + \frac{1}{\sqrt{h(x)}}\Bigr) + \frac{1 - F(\alpha_{\rho,x,y} x)}{1 - F(x)} \frac{o(1)}{\sqrt{h(x)}}\Bigr],

thus the proof is complete. □

Proof of Theorem 2. From the proof of Theorem 1 we obtain for all large x

P\{X > x\} = \frac{1 - F(x)}{\sqrt{2\pi\, x\, w(x)}} \Bigl[1 + O\Bigl(A(x) + \frac{1}{x\, w(x)}\Bigr)\Bigr].

Since further α_{ρ,x,y} ≥ 1 and it is bounded for all x large and y positive such that (13) holds, we may write for all x large

P\{X > \alpha_{\rho,x,y} x\} = \frac{1 - F(\alpha_{\rho,x,y} x)}{\sqrt{2\pi\, \alpha_{\rho,x,y}\, x\, w(\alpha_{\rho,x,y} x)}} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr].    (34)

Applying Theorem 1 we have

P\{X > x, Y > y\} = \frac{\alpha_{\rho,x,y}^{3/2} K_{\rho,x,y}}{\sqrt{2\pi\, x\, w(\alpha_{\rho,x,y} x)}}\, \frac{1 - F(\alpha_{\rho,x,y} x)}{\sqrt{2\pi\, \alpha_{\rho,x,y}\, x\, w(\alpha_{\rho,x,y} x)}} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr]
= \frac{\alpha_{\rho,x,y}^{3/2} K_{\rho,x,y}}{\sqrt{2\pi\, x\, w(\alpha_{\rho,x,y} x)}}\, P\{X > \alpha_{\rho,x,y} x\} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr]^2
= \frac{\alpha_{\rho,x,y}^{3/2} K_{\rho,x,y}}{\sqrt{2\pi\, x\, w(\alpha_{\rho,x,y} x)}}\, P\{X > \alpha_{\rho,x,y} x\} \Bigl[1 + O\Bigl(A(\alpha_{\rho,x,y} x) + \frac{1}{x\, w(\alpha_{\rho,x,y} x)}\Bigr)\Bigr],

thus the result follows. □

Proof of Theorem 3. Since

P\{Y > y \mid X > x\} = P\{X > x, Y > y\}/P\{X > x\},    \forall x > 0,

the proof of the first claim follows easily utilising the asymptotic expansion of Theorem 1 and (34). The fact that the distribution function F is in the Gumbel max-domain of attraction with the scaling function w implies also that the random variable X has a distribution function in the Gumbel max-domain of attraction (Berman (1992), Hashorva (2005b)) with the same scaling function w. If x, y are positive constants such that α_{ρ,x,y} > c > 1 holds, then

\lim_{x \to \infty} \frac{P\{X > \alpha_{\rho,x,y} x\}}{P\{X > x\}} = 0,

hence the result follows. □

Proof of Lemma 4. Since F_i, i ≥ 1, are in the Gumbel max-domain of attraction with the same scaling function w, it follows easily that the distribution function F is in the Gumbel max-domain of attraction with the same scaling function w. It can be easily checked that both Theorem 2 and Theorem 3 hold with A(u) := \sum_{i=1}^{\infty} A_{i1}(u), u > 0, hence the result follows. □

Acknowledgement

I am indebted to the referee for several corrections and suggestions.

References

Abdous, B., Fougères, A.-L., Ghoudi, K., Soulier, P., 2007. Estimation of bivariate excess probabilities for elliptical models. arXiv:math.ST/0611914v2.
Asimit, A.V., Jones, B.L., 2007. Extreme behavior of bivariate elliptical distributions. Insurance: Mathematics and Economics 41 (1), 53–61.
Berman, M.S., 1982. Sojourns and extremes of stationary processes. The Annals of Probability 10, 1–46.
Berman, M.S., 1983. Sojourns and extremes of Fourier sums and series with random coefficients. Stochastic Processes and their Applications 15, 213–238.
Berman, M.S., 1992. Sojourns and Extremes of Stochastic Processes. Wadsworth & Brooks/Cole, Boston.
Cambanis, S., Huang, S., Simons, G., 1981. On the theory of elliptically contoured distributions. Journal of Multivariate Analysis 11 (3), 368–385.
Carnal, H., 1970. Die konvexe Hülle von n rotations-symmetrisch verteilten Punkten. Z. Wahrscheinlichkeitstheorie Verw. Geb. 15, 168–176.
Dai, M., Mukherjea, A., 2001. Identification of the parameters of a multivariate normal vector by the distribution of the minimum. Journal of Theoretical Probability 14 (1), 267–298.
De Haan, L., Ferreira, A., 2006. Extreme Value Theory: An Introduction. Springer, New York.
Eddy, W.F., Gale, J.D., 1981. The convex hull of a spherically symmetric sample. Advances in Applied Probability 13, 751–763.
Embrechts, P., Klüppelberg, C., Mikosch, T., 1997. Modelling Extremal Events for Insurance and Finance. Springer, Berlin.
Falk, M., Hüsler, J., Reiss, R.-D., 2004. Laws of Small Numbers: Extremes and Rare Events, second ed. DMV Seminar, vol. 23. Birkhäuser, Basel.
Fang, K.-T., Kotz, S., Ng, K.-W., 1990. Symmetric Multivariate and Related Distributions. Chapman and Hall, London.
Galambos, J., 1987. Asymptotic Theory of Extreme Order Statistics, second ed. Krieger, Malabar, Florida.
Gale, J.D., 1980. The asymptotic distribution of the convex hull of a random sample. Ph.D. Thesis, Carnegie-Mellon University.
Hashorva, E., 2005a. Asymptotics and bounds for multivariate Gaussian tails. Journal of Theoretical Probability 18 (1), 79–97.
Hashorva, E., 2005b. Elliptical triangular arrays in the max-domain of attraction of Hüsler–Reiss distribution. Statistics & Probability Letters 72 (2), 125–135.
Hashorva, E., 2006. Gaussian approximation of conditional elliptical random vectors. Stochastic Models 22 (3), 441–457.
Hashorva, E., 2007. Exact tail asymptotics for Type I bivariate elliptical distributions. Albanian Journal of Mathematics 1 (2), 99–114.
Hashorva, E., Hüsler, J., 2003. On multivariate Gaussian tails. Annals of the Institute of Statistical Mathematics 55 (3), 507–522.
Hashorva, E., Kotz, S., Kume, A., 2007. Lp-norm generalised symmetrised Dirichlet distributions. Albanian Journal of Mathematics 1 (1), 31–56.
Hult, H., Lindskog, F., 2002. Multivariate extremes, aggregation and dependence in elliptical distributions. Advances in Applied Probability 34 (3), 587–608.
Kano, Y., 1994. Consistency property of elliptical probability density functions. Journal of Multivariate Analysis 51 (1), 139–147.
Klüppelberg, C., Kuhn, K., Peng, L., 2007. Estimating the tail dependence of an elliptical distribution. Bernoulli 13 (1), 229–251.
Kotz, S., 1975. Multivariate distributions at a cross-road. In: Patil, G.P., Kotz, S., Ord, J.K. (Eds.), Statistical Distributions in Scientific Work, vol. 1. D. Reidel, Dordrecht, pp. 240–247.
Kotz, S., Balakrishnan, N., Johnson, N.L., 2000. Continuous Multivariate Distributions, second ed. Wiley, New York.
Kotz, S., Nadarajah, S., 2005. Extreme Value Distributions: Theory and Applications, second printing. Imperial College Press, London.
Reiss, R.-D., 1989. Approximate Distributions of Order Statistics: With Applications to Nonparametric Statistics. Springer, New York.
Resnick, S.I., 1987. Extreme Values, Regular Variation and Point Processes. Springer, New York.
Schmidt, R., Schmieder, C., 2007. Modelling dynamic portfolio risk using risk drivers of elliptical processes. Insurance: Mathematics and Economics (available online).
Szabłowski, P.J., 1998. Uniform distributions on spheres in finite dimensional Lα and their generalizations. Journal of Multivariate Analysis 64 (2), 103–117.