Statistics and Probability Letters 83 (2013) 1787–1799
Ruin probabilities of a two-dimensional risk model with dependent risks of heavy tail✩

Xinmei Shen^a, Yi Zhang^{b,∗}

a School of Mathematical Sciences, Dalian University of Technology, Dalian, 116024, China
b Department of Mathematics, Zhejiang University, Hangzhou, 310027, China
Article history: Received 2 June 2012; received in revised form 20 December 2012; accepted 30 March 2013; available online 16 April 2013.

MSC: 60G50; 62P05; 60F10
Abstract. This paper considers a two-dimensional discrete-time risk model with a constant interest rate and individual net losses in ERV$(-\alpha, -\beta)$, the class of extended regular variation with indices $0 < \alpha \le \beta < \infty$. Some asymptotic results for both finite-time and infinite-time ruin probabilities under two types of ruin times are established. The two components of the net losses are allowed to be generally dependent. © 2013 Published by Elsevier B.V.
Keywords: Ruin probability; Two-dimensional risk model; Extended regular variation
1. Introduction

In recent years, multi-dimensional risk models have received more and more attention; see, for example, Chan et al. (2003) and Cai and Li (2005, 2007). As pointed out in Cai and Li (2005), an unexpected claim event might induce more than one type of claim in an umbrella insurance policy. A typical example is motor insurance, where an accident can cause claims both for vehicle damage and for third-party injuries. More on the background of multi-dimensional risk models can be found in Chan et al. (2003) and the references therein.

When the individual risks are heavy-tailed, ruin-related problems under multi-dimensional risk models are very complex, even in the two-dimensional case. So far as we know, Li et al. (2007) were the first to investigate an explicit asymptotic estimate for the finite-time ruin probability of a two-dimensional risk model with heavy-tailed claims, but they assumed that the claim vectors consist of independent components. In practice, the various types of claims in an umbrella insurance policy are often correlated, and ignoring this dependence often results in over- or under-estimating the ruin probabilities.

In this paper, we consider a two-dimensional discrete-time risk model that consists of two sub-portfolios. Let $\vec X_k = (X_{1,k}, X_{2,k})^\top$ be a random vector representing the net losses of an insurance company (i.e. the total claim amounts minus the total premium incomes) during the $k$th period, $k \ge 1$, and let $r \ (\ge 0)$ be the constant interest rate. We suppose that the net losses
✩ Research supported by the Fundamental Research Funds for the Central Universities, the National Natural Science Foundation of China (Nos. 11101062 and 10901138), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20100041120038), the Research Foundation for Doctors of Liaoning Province (No. 20121016), and the MOE Project of Key Research Institute of Humanities and Social Sciences at Universities (No. 11JJD790053).
∗ Corresponding author. Tel.: +86 571 87953667. E-mail addresses: [email protected], [email protected] (Y. Zhang).
doi:10.1016/j.spl.2013.03.029
are calculated at the end of the year and that the insurance company starts with an initial capital vector $\vec x = (x_1, x_2)^\top$ for the two sub-portfolios. We allow $X_{1,k}$ and $X_{2,k}$ to be dependent. Thus, the discounted surplus vector, denoted by $\vec U_i = (U_{1,i}, U_{2,i})^\top$, accumulated up to the end of year $i$, is

$$\vec U_0 = \vec x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \qquad \vec U_i = \begin{pmatrix} U_{1,i} \\ U_{2,i} \end{pmatrix} = \begin{pmatrix} x_1(1+r)^i - \sum_{k=1}^{i} X_{1,k}(1+r)^{i-k} \\[2pt] x_2(1+r)^i - \sum_{k=1}^{i} X_{2,k}(1+r)^{i-k} \end{pmatrix}, \quad i \ge 1. \tag{1.1}$$
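For readers who wish to experiment with model (1.1), a minimal simulation sketch follows. This is our own illustration, not part of the paper: the Pareto-type loss sampler and all parameter values are assumptions made purely for demonstration.

```python
import random

def surplus_paths(x1, x2, r, n, draw_losses, seed=0):
    """Simulate the surplus vectors (U_{1,i}, U_{2,i}), i = 0..n, of model (1.1)
    via the equivalent recursion U_{l,i} = (1 + r) * U_{l,i-1} - X_{l,i}."""
    rng = random.Random(seed)
    U1, U2 = [float(x1)], [float(x2)]
    for _ in range(n):
        X1k, X2k = draw_losses(rng)
        U1.append((1 + r) * U1[-1] - X1k)
        U2.append((1 + r) * U2[-1] - X2k)
    return U1, U2

def pareto_net_losses(rng, alpha=1.5, premium=1.2):
    # hypothetical heavy-tailed net losses: Pareto(alpha) claim minus a premium
    return (rng.random() ** (-1 / alpha) - premium,
            rng.random() ** (-1 / alpha) - premium)

U1, U2 = surplus_paths(10.0, 10.0, 0.05, 20, pareto_net_losses)
print(len(U1), len(U2))  # 21 21 (values for i = 0..20)
```

Expanding the recursion reproduces the closed form in (1.1), $U_{l,i} = x_l(1+r)^i - \sum_{k=1}^i X_{l,k}(1+r)^{i-k}$.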
For notational convenience, all limit relationships throughout this paper are for $x \to \infty$ unless stated otherwise. For two positive functions $a(x)$ and $b(x)$, we write $a(x) \lesssim b(x)$ if $\limsup_{x\to\infty} a(x)/b(x) \le 1$, $a(x) \gtrsim b(x)$ if $\liminf_{x\to\infty} a(x)/b(x) \ge 1$, and $a(x) \sim b(x)$ if both hold. We use the symbol $x^+ = \max\{x, 0\}$ and denote $S_{i,n} = \sum_{k=1}^n X_{i,k}$, $i = 1, 2$. We use the following assumptions.
H1. $\{\vec X_i = (X_{1,i}, X_{2,i})^\top, i \ge 1\}$ is a sequence of independent copies of the random vector $\vec X = (X_1, X_2)^\top$ with common marginal distribution functions $F_1$ (of $X_1$) and $F_2$ (of $X_2$), joint distribution $F_{1,2}$, and joint survival function $\bar F_{1,2}(x_1, x_2) = P(X_1 > x_1, X_2 > x_2)$. The random vector $(-X_1, -X_2)$ has a copula $C(\cdot, \cdot)$.

H2. The tail distribution $\bar F_1 = 1 - F_1$ is extended regularly varying with indices $0 < \alpha \le \beta < \infty$, denoted by $\bar F_1 \in \mathrm{ERV}(-\alpha, -\beta)$, i.e.

$$v^{-\beta} \le \liminf_{x\to\infty} \frac{\bar F_1(vx)}{\bar F_1(x)} \le \limsup_{x\to\infty} \frac{\bar F_1(vx)}{\bar F_1(x)} \le v^{-\alpha} \tag{1.2}$$

for all $v \ge 1$, and $F_1$, $F_2$ are tail-equivalent, that is,

$$\lim_{x\to\infty} \frac{\bar F_2(x)}{\bar F_1(x)} = c \tag{1.3}$$

for some constant $c \in (0, \infty)$. Moreover, suppose that the left tail of each univariate marginal distribution is lighter than its right tail, i.e.

$$\lim_{x\to\infty} \frac{F_i(-x)}{\bar F_i(x)} = 0, \quad i = 1, 2. \tag{1.4}$$
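As a concrete sanity check (our own illustration, not from the paper), a pure Pareto tail $\bar F(x) = x^{-\alpha}$ for $x \ge 1$ satisfies (1.2) with $\alpha = \beta$, since then $\bar F(vx)/\bar F(x) = v^{-\alpha}$ exactly:

```python
alpha = 1.5  # arbitrary choice of tail index

def F_bar(t):
    # Pareto tail: P(X > t) = t**(-alpha) for t >= 1
    return min(1.0, t ** (-alpha))

x = 1e6
for v in (1.0, 2.0, 5.0):
    ratio = F_bar(v * x) / F_bar(x)
    print(abs(ratio - v ** (-alpha)) < 1e-9)  # True: ERV bounds hold with alpha = beta
```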
H3. There exists $1 \le \gamma < \infty$ such that

$$\lim_{x\to 0} \frac{C(v g(x), v f(x))}{C(g(x), f(x))} = v^\gamma, \quad v > 0, \tag{1.5}$$

for all $f(x) > 0$ and $g(x) > 0$ satisfying

$$\lim_{x\to 0} f(x) = \lim_{x\to 0} g(x) = 0. \tag{1.6}$$

H4. For any positive constants $a, b > 0$, there exists $0 \le \lambda < 1$ such that

$$\lim_{x\to\infty} \frac{C(\bar F_1(ax), \bar F_2(bx))}{\bar F_1(ax) + \bar F_2(bx)} = \lambda. \tag{1.7}$$
Remark 1.1. From assumption H1, one can see that $\bar F_{1,2}(x_1, x_2) = C(\bar F_1(x_1), \bar F_2(x_2))$, due to Sklar's Theorem. For $0 < c < \infty$, relation (1.3) in assumption H2 guarantees that both marginal tails satisfy $\bar F_1, \bar F_2 \in \mathrm{ERV}(-\alpha, -\beta)$; see Embrechts et al. (1997).

Remark 1.2. Assumption H3 is satisfied by a fair number of important families of copulas; we list some of them below.

(1) If $-X_1$ and $-X_2$ are dependent through the Farlie–Gumbel–Morgenstern family copula, that is, $C(u_1, u_2) = u_1 u_2 [1 + \theta(1-u_1)(1-u_2)]$, $|\theta| \le 1$, then for $f(x)$ and $g(x)$ satisfying (1.6),

$$\lim_{x\to 0} \frac{C(v f(x), v g(x))}{C(f(x), g(x))} = v^2, \quad v > 0.$$

(2) If $-X_1$ and $-X_2$ are dependent through the Cuadras–Augé family copula, that is, $C(u_1, u_2) = [\min(u_1, u_2)]^\theta [u_1 u_2]^{1-\theta}$, $0 \le \theta \le 1$, then for $f(x)$ and $g(x)$ satisfying (1.6),

$$\lim_{x\to 0} \frac{C(v f(x), v g(x))}{C(f(x), g(x))} = v^{2-\theta}, \quad v > 0.$$
(3) Assume that a continuous, strictly decreasing, convex function $\varphi: [0,1] \to [0,\infty]$ with $\varphi(0) = \infty$ is the generator of the Archimedean copula $C(u_1, u_2) := \varphi^{-1}(\varphi(u_1) + \varphi(u_2))$, where $\varphi^{-1}$ is the generalized inverse of $\varphi$. If $\varphi$ is regularly varying at $0$ with index $-\rho < 0$, i.e. $\varphi \in R_{-\rho}(0)$, then $\varphi^{-1} \in R_{-\rho^{-1}}$, that is, for any $v > 0$,

$$\lim_{x\to\infty} \frac{\varphi^{-1}(vx)}{\varphi^{-1}(x)} = v^{-\rho^{-1}};$$

see Resnick (2007). Then for $f(x)$ and $g(x)$ satisfying (1.6),

$$\lim_{x\to 0} \frac{C(v f(x), v g(x))}{C(f(x), g(x))} = v.$$
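The three limits above are easy to probe numerically. The sketch below is our own illustration (the parameter values $\theta = 0.5$ and $\rho = 2$, and the use of the Clayton-type generator $\varphi(t) = t^{-\rho} - 1$ as the Archimedean example, are arbitrary assumptions): it evaluates the ratio in (1.5) along $f(x) = g(x) = t$ for a small $t$ and recovers $\gamma = 2$, $\gamma = 2 - \theta$ and $\gamma = 1$, respectively.

```python
def fgm(u1, u2, theta=0.5):
    # Farlie-Gumbel-Morgenstern copula, |theta| <= 1
    return u1 * u2 * (1 + theta * (1 - u1) * (1 - u2))

def cuadras_auge(u1, u2, theta=0.5):
    # Cuadras-Auge copula, 0 <= theta <= 1
    return min(u1, u2) ** theta * (u1 * u2) ** (1 - theta)

def clayton(u1, u2, rho=2.0):
    # Archimedean copula with generator phi(t) = t**(-rho) - 1, so phi(t) ~ t**(-rho) at 0
    return (u1 ** (-rho) + u2 ** (-rho) - 1) ** (-1 / rho)

v, t = 2.0, 1e-8  # take f(x) = g(x) = t -> 0 in (1.5)
for cop, gamma in ((fgm, 2.0), (cuadras_auge, 1.5), (clayton, 1.0)):
    ratio = cop(v * t, v * t) / cop(t, t)
    print(round(ratio, 6), v ** gamma)  # the ratio approaches v**gamma
```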
Remark 1.3. Assumption H4 describes the relationship between the joint survival function $\bar F_{1,2}$ and the univariate marginal tails $\bar F_i$, $i = 1, 2$. Assumption H4 is satisfied by many important families of copulas. Obviously, the Farlie–Gumbel–Morgenstern and Cuadras–Augé copulas mentioned in Remark 1.2 satisfy assumption H4 with $\lambda = 0$, while the Archimedean copula with $\varphi(t) \sim t^{-\rho}$ also satisfies assumption H4 when $\bar F_1 \in R_{-\alpha}$ ($0 < \alpha < \infty$), with

$$\lambda = \frac{\big(1 + c^{-\rho}(b/a)^{\alpha\rho}\big)^{-\rho^{-1}}}{1 + c(b/a)^{-\alpha}} \in (0, 1),$$

where $\lim_{x\to\infty} \bar F_2(x)/\bar F_1(x) = c \in (0, \infty)$. In the case $\bar F_1 \in R_{-\alpha}$, (1.7) can be reduced to

$$\lim_{x\to\infty} \frac{C(\bar F_1(ax), \bar F_2(bx))}{\bar F_1(ax)} = \lambda' \in [0, 1],$$

and we can obtain that

$$0 \le \lim_{x\to\infty} \frac{P(X_1 > ax, X_2 > bx)}{P(X_1 > ax) + P(X_2 > bx)} = \frac{\lambda'}{1 + c(b/a)^{-\alpha}} < 1.$$

Moreover, if $X_1$ and $X_2$ are identically distributed with distribution $F$ and $a = b$, then one can see that

$$\lambda = \frac{1}{2}\lim_{x\to\infty} P(X_2 > x \mid X_1 > x) = \frac{1}{2}\lim_{u\to 1} P(F(X_2) > u \mid F(X_1) > u),$$
where $\lim_{u\to 1} P(F(X_2) > u \mid F(X_1) > u)$ is the (upper) tail dependence coefficient; see Albrecher et al. (2006).

There are several types of ruin times for the multi-dimensional risk model. If we denote the one-dimensional ruin times by $T_1 = \min\{n : U_{1,n} < 0\}$ and $T_2 = \min\{n : U_{2,n} < 0\}$, with the convention $\min\{\emptyset\} = +\infty$, then we can define the ruin times for the two-dimensional case as $T_{\mathrm{and}} = T_1 \vee T_2$ and $T_{\mathrm{or}} = T_1 \wedge T_2$. Thus, we have the corresponding finite-time ruin probabilities:
$$\psi_{\mathrm{and}}(\vec x, n) = P\Big( \bigcap_{j=1}^{2} \Big\{ \min_{0\le i\le n} U_{j,i} < 0 \Big\} \,\Big|\, \vec U_0 = \vec x \Big), \qquad \psi_{\mathrm{or}}(\vec x, n) = P\Big( \bigcup_{j=1}^{2} \Big\{ \min_{0\le i\le n} U_{j,i} < 0 \Big\} \,\Big|\, \vec U_0 = \vec x \Big),$$

and the corresponding infinite-time ruin probabilities are defined as follows:

$$\psi_{\mathrm{and}}(\vec x) = P\Big( \bigcap_{j=1}^{2} \Big\{ \min_{0\le i<\infty} U_{j,i} < 0 \Big\} \,\Big|\, \vec U_0 = \vec x \Big), \qquad \psi_{\mathrm{or}}(\vec x) = P\Big( \bigcup_{j=1}^{2} \Big\{ \min_{0\le i<\infty} U_{j,i} < 0 \Big\} \,\Big|\, \vec U_0 = \vec x \Big).$$
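In code, the one-dimensional ruin times and the two-dimensional variants are straightforward to extract from simulated surplus paths. The sketch below is our own illustration (not from the paper), using Python's `float("inf")` for the convention $\min\{\emptyset\} = +\infty$:

```python
def ruin_times(U1, U2):
    """Given surplus paths U1, U2 (lists indexed by i = 0, 1, ..., n), return
    (T1, T2, T_and, T_or) with the convention min{} = +infinity."""
    inf = float("inf")
    T1 = next((i for i, u in enumerate(U1) if u < 0), inf)
    T2 = next((i for i, u in enumerate(U2) if u < 0), inf)
    # T_and = T1 v T2 (both lines ruined), T_or = T1 ^ T2 (some line ruined)
    return T1, T2, max(T1, T2), min(T1, T2)

# line 1 is ruined at i = 2; line 2 survives the whole horizon:
print(ruin_times([3, 1, -2, 4], [5, 4, 3, 2]))  # (2, inf, inf, 2)
```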
Other types of ruin time, such as $T_{\mathrm{sum}} = \min\{n : U_{1,n} + U_{2,n} < 0\}$, can also be introduced. However, we remark that if $T_{\mathrm{sum}}$ is used to define the ruin probabilities, then all problems reduce to those of a one-dimensional risk model. From the definitions of $T_{\mathrm{and}}$ and $T_{\mathrm{or}}$, we see that $T_{\mathrm{and}} \ge T_{\mathrm{or}}$. $T_{\mathrm{and}}$ represents a more critical time than $T_{\mathrm{or}}$, but the early warning provided by $T_{\mathrm{or}}$ can help the insurance company adjust its strategy so as to survive. We consider the ruin probabilities based on $T_{\mathrm{and}}$ and $T_{\mathrm{or}}$, respectively.

2. Main results

In the proof of our main results, we need the following lemmas. Besides its critical role in the proof of Theorem 2.1, Lemma 2.2 is of independent interest.
Lemma 2.1. Suppose that $\{\vec X_i, i \ge 1\}$ is a sequence of nonnegative random vectors satisfying assumptions H1 and H3. If (1.2) and (1.3) hold, then for any positive constants $a, b > 0$ and any $v \ge 1$,

$$v^{-\beta\gamma} \le \liminf_{x\to\infty} \frac{C(\bar F_1(vax), \bar F_2(vbx))}{C(\bar F_1(ax), \bar F_2(bx))} \le \limsup_{x\to\infty} \frac{C(\bar F_1(vax), \bar F_2(vbx))}{C(\bar F_1(ax), \bar F_2(bx))} \le v^{-\alpha\gamma}. \tag{2.1}$$
Proof. By (1.2) and (1.3), for any $0 < \epsilon < 1$ and $x$ large enough, we have

$$(v^{-\beta} - \epsilon)\bar F_l(x) \le \bar F_l(vx) \le (v^{-\alpha} + \epsilon)\bar F_l(x), \quad l = 1, 2,$$

where $v \ge 1$. By Remark 1.1 and the monotonicity of the copula in each argument,

$$\limsup_{x\to\infty} \frac{C(\bar F_1(vax), \bar F_2(vbx))}{C(\bar F_1(ax), \bar F_2(bx))} \le \limsup_{x\to\infty} \frac{C\big((v^{-\alpha}+\epsilon)\bar F_1(ax), (v^{-\alpha}+\epsilon)\bar F_2(bx)\big)}{C(\bar F_1(ax), \bar F_2(bx))} = (v^{-\alpha} + \epsilon)^\gamma,$$

where the last step uses assumption H3. Letting $\epsilon \to 0$, we have

$$\limsup_{x\to\infty} \frac{C(\bar F_1(vax), \bar F_2(vbx))}{C(\bar F_1(ax), \bar F_2(bx))} \le v^{-\alpha\gamma}.$$

Similarly, we can get

$$\liminf_{x\to\infty} \frac{C(\bar F_1(vax), \bar F_2(vbx))}{C(\bar F_1(ax), \bar F_2(bx))} \ge v^{-\beta\gamma}.$$
Lemma 2.2. Suppose that $\{\vec X_i, i \ge 1\}$ is a sequence of nonnegative random vectors satisfying assumptions H1–H3. Then for any positive constants $a, b > 0$ and any fixed integer $n \ge 1$,

$$P\Big( \sum_{i=1}^n X_{1,i} > ax,\ \sum_{j=1}^n X_{2,j} > bx \Big) \sim \sum_{i=1}^n \sum_{j=1}^n P\big( X_{1,i} > ax,\ X_{2,j} > bx \big). \tag{2.2}$$
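Relation (2.2) can be probed by simulation. The sketch below is our own illustration under assumed parameters (independent Pareto components, so each joint term on the right of (2.2) factorizes into a product); it compares a crude Monte Carlo estimate of the left side with the double-sum approximation:

```python
import random

def mc_check(n=3, a=1.0, b=1.0, x=50.0, alpha=1.0, trials=200000, seed=1):
    rng = random.Random(seed)
    pareto = lambda: rng.random() ** (-1 / alpha)  # P(X > t) = 1/t for t >= 1
    hits = 0
    for _ in range(trials):
        s1 = sum(pareto() for _ in range(n))
        s2 = sum(pareto() for _ in range(n))
        hits += (s1 > a * x and s2 > b * x)
    lhs = hits / trials  # Monte Carlo estimate of P(S_{1,n} > ax, S_{2,n} > bx)
    # right side of (2.2): n*n identical product terms under independence
    rhs = n * n * min(1.0, (a * x) ** -alpha) * min(1.0, (b * x) ** -alpha)
    return lhs, rhs

lhs, rhs = mc_check()
print(lhs > 0 and rhs > 0)  # both estimates are small but positive
```

Heavy-tailed Monte Carlo converges slowly, so this only gives a rough check of the order of magnitude, not of the precise asymptotic constant.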
Proof. Apparently, (2.2) holds for $n = 1$; hence we suppose $n \ge 2$. On the one hand, we have

$$P(S_{1,n} > ax,\ S_{2,n} > bx) \ge P\Big( \max_{1\le i\le n} X_{1,i} > ax,\ \max_{1\le j\le n} X_{2,j} > bx \Big)$$
$$\ge \sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx) - \sum_{i=1}^n \sum_{j_1\ne j_2} P(X_{1,i} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx)$$
$$\quad - \sum_{i_1\ne i_2} \sum_{j=1}^n P(X_{1,i_1} > ax,\ X_{1,i_2} > ax,\ X_{2,j} > bx) - \sum_{i_1\ne i_2} \sum_{j_1\ne j_2} P(X_{1,i_1} > ax,\ X_{1,i_2} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx). \tag{2.3}$$

Note that

$$P(X_{1,i} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx) = P(X_{1,i} > ax,\ X_{2,i} > bx)\, P(X_{2,j_2} > bx)$$

when $i = j_1$, $j_1 \ne j_2$, while

$$P(X_{1,i} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx) = P(X_{1,i} > ax,\ X_{2,j_2} > bx)\, P(X_{2,j_1} > bx)$$

when $i \ne j_1$, $j_1 \ne j_2$. Hence, we get

$$\lim_{x\to\infty} \frac{\sum_{i=1}^n \sum_{j_1\ne j_2} P(X_{1,i} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx)}{\sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx)} = 0.$$

Similarly, one can obtain

$$\lim_{x\to\infty} \frac{\sum_{i_1\ne i_2} \sum_{j=1}^n P(X_{1,i_1} > ax,\ X_{1,i_2} > ax,\ X_{2,j} > bx)}{\sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx)} = 0.$$
For the fourth sum on the right hand side of (2.3), noting that

$$P(X_{1,i_1} > ax,\ X_{1,i_2} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx) \le P(X_{1,i_1} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx),$$

by the same calculation as for the second sum on the right hand side of (2.3), we obtain

$$\lim_{x\to\infty} \frac{\sum_{i_1\ne i_2} \sum_{j_1\ne j_2} P(X_{1,i_1} > ax,\ X_{1,i_2} > ax,\ X_{2,j_1} > bx,\ X_{2,j_2} > bx)}{\sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx)} = 0.$$

Therefore, it is easy to see that

$$P(S_{1,n} > ax,\ S_{2,n} > bx) \gtrsim \sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx). \tag{2.4}$$
On the other hand, for any fixed number $v$ with $1/2 < v < 1$, set $y = (1-v)/(n-1)$, so that $0 < y < 1/n < v$. We have

$$P(S_{1,n} > ax,\ S_{2,n} > bx) \le \sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > vax,\ X_{2,j} > vbx)$$
$$\quad + P\Big( S_{1,n} > ax,\ S_{2,n} > bx,\ \bigcap_{i=1}^n \{X_{1,i} \le vax\},\ \bigcap_{j=1}^n \{X_{2,j} \le vbx\} \Big)$$
$$\quad + P\Big( S_{1,n} > ax,\ S_{2,n} > bx,\ \bigcap_{i=1}^n \{X_{1,i} \le vax\},\ \bigcup_{j=1}^n \{X_{2,j} > vbx\} \Big)$$
$$\quad + P\Big( S_{1,n} > ax,\ S_{2,n} > bx,\ \bigcup_{i=1}^n \{X_{1,i} > vax\},\ \bigcap_{j=1}^n \{X_{2,j} \le vbx\} \Big)$$
$$=: K_1 + K_2 + K_3 + K_4.$$

By assumption H2 and Lemma 2.1, it is easy to see that for $1/2 < v < 1$,

$$\limsup_{x\to\infty} \frac{P(X_{1,i} > vax,\ X_{2,j} > vbx)}{P(X_{1,i} > ax,\ X_{2,j} > bx)} \le \begin{cases} v^{-2\beta}, & i \ne j, \\ v^{-\beta\gamma}, & i = j, \end{cases} \tag{2.5}$$

which yields

$$\lim_{v\to 1} \limsup_{x\to\infty} \frac{K_1}{\sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx)} = 1. \tag{2.6}$$
Next we shall estimate $K_2$. We have

$$K_2 = P\Big( S_{1,n} > ax,\ S_{2,n} > bx,\ \bigcap_{i=1}^n \{X_{1,i} \le vax\},\ \bigcap_{j=1}^n \{X_{2,j} \le vbx\},\ \max_{1\le i\le n} X_{1,i} > \frac{ax}{n},\ \max_{1\le j\le n} X_{2,j} > \frac{bx}{n} \Big)$$
$$\le \sum_{i=1}^n \sum_{j=1}^n P\Big( S_{1,n} > ax,\ S_{2,n} > bx,\ X_{1,i} \le vax,\ X_{2,j} \le vbx,\ X_{1,i} > \frac{ax}{n},\ X_{2,j} > \frac{bx}{n} \Big)$$
$$\le \sum_{i=1}^n \sum_{j=1}^n P\Big( \sum_{s_1\ne i} X_{1,s_1} > (1-v)ax,\ X_{1,i} > \frac{ax}{n},\ \sum_{s_2\ne j} X_{2,s_2} > (1-v)bx,\ X_{2,j} > \frac{bx}{n} \Big)$$
$$\le \sum_{i=1}^n \sum_{j=1}^n \sum_{s_1\ne i} \sum_{s_2\ne j} P\big( X_{1,s_1} > yax,\ X_{2,s_2} > ybx,\ X_{1,i} > yax,\ X_{2,j} > ybx \big). \tag{2.7}$$

In the light of (2.5), one can then conclude that

$$\lim_{x\to\infty} \frac{K_2}{\sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx)} = 0, \tag{2.8}$$
since the right hand side of (2.7) equals

$$\sum_{i=j,\, s_1\ne i,\, s_2\ne j} P(X_{1,i} > yax,\ X_{2,j} > ybx)\, P(X_{1,s_1} > yax,\ X_{2,s_2} > ybx)$$
$$\quad + \sum_{i\ne j,\, s_1\ne i,\, s_2\ne j,\, s_1\ne s_2} P(X_{1,i} > yax,\ X_{2,s_2} > ybx)\, P(X_{1,s_1} > yax,\ X_{2,j} > ybx)$$
$$\quad + \sum_{i\ne j,\, s_1\ne i,\, s_2\ne j,\, s_1 = s_2} P(X_{1,i} > yax)\, P(X_{2,j} > ybx)\, P(X_{1,s_1} > yax,\ X_{2,s_2} > ybx).$$
Now we turn to consider $K_3$. We have

$$K_3 \le P\Big( S_{1,n} > ax,\ \bigcap_{i=1}^n \{X_{1,i} \le vax\},\ \bigcup_{j=1}^n \{X_{2,j} > vbx\},\ \max_{1\le i\le n} X_{1,i} > \frac{ax}{n} \Big)$$
$$\le \sum_{j=1}^n \sum_{i=1}^n P\Big( S_{1,n} > ax,\ X_{1,i} \le vax,\ X_{2,j} > vbx,\ X_{1,i} > \frac{ax}{n} \Big)$$
$$\le \sum_{j=1}^n \sum_{i=1}^n P\Big( \sum_{s_1\ne i} X_{1,s_1} > (1-v)ax,\ X_{2,j} > vbx,\ X_{1,i} > \frac{ax}{n} \Big)$$
$$\le \sum_{i=1}^n \sum_{j=1}^n \sum_{s_1\ne i} P\big( X_{1,s_1} > yax,\ X_{2,j} > ybx,\ X_{1,i} > yax \big).$$

Comparing this computation with (2.8), still by (2.5), one can immediately obtain

$$\lim_{x\to\infty} \frac{K_3}{\sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx)} = 0. \tag{2.9}$$
The estimation of $K_4$ is the same as that of $K_3$; hence

$$\lim_{x\to\infty} \frac{K_4}{\sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx)} = 0. \tag{2.10}$$

As a result, combining (2.6) and (2.8)–(2.10) yields

$$P(S_{1,n} > ax,\ S_{2,n} > bx) \lesssim \sum_{i=1}^n \sum_{j=1}^n P(X_{1,i} > ax,\ X_{2,j} > bx). \tag{2.11}$$

Consequently, (2.4) and (2.11) imply the desired result.
We further suppose that the total initial capital of the insurance company is $x$, allotted to the two sub-portfolios as $\vec x = (a_1 x, a_2 x)^\top$ with $a_1 + a_2 = 1$. With the above preparations, we state our main results in the following theorems.
Theorem 2.1. Suppose that assumptions H1–H3 are satisfied for model (1.1). Then for any fixed integer $n \ge 1$,

$$\psi_{\mathrm{and}}(\vec x, n) \sim \sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big). \tag{2.12}$$

Moreover, if assumption H4 also holds for model (1.1), then for any fixed integer $n \ge 1$,

$$\psi_{\mathrm{or}}(\vec x, n) \sim \sum_{k=1}^n P\big( X_{1,k} > a_1(1+r)^k x \big) + \sum_{k=1}^n P\big( X_{2,k} > a_2(1+r)^k x \big) - \sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big). \tag{2.13}$$
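To get a feel for the approximations in Theorem 2.1, one can evaluate the right-hand sides of (2.12) and (2.13) in the simplest hypothetical case: independent components (so the joint probabilities factorize and $\lambda = 0$) with identical Pareto tails $P(X > t) = t^{-\alpha}$. This is our own illustration with arbitrary parameter values, not a computation from the paper.

```python
def tail(t, alpha=1.5):
    # hypothetical Pareto-type right tail P(X > t) = t**(-alpha) for t >= 1
    return min(1.0, t ** (-alpha))

def psi_approx(x, n, a1=0.5, a2=0.5, r=0.05, alpha=1.5):
    """Right-hand sides of (2.12) and (2.13) when the two components are independent."""
    p1 = [tail(a1 * (1 + r) ** k * x, alpha) for k in range(1, n + 1)]
    p2 = [tail(a2 * (1 + r) ** j * x, alpha) for j in range(1, n + 1)]
    joint = sum(u * w for u in p1 for w in p2)   # double sum in (2.12)
    return joint, sum(p1) + sum(p2) - joint      # (2.12), (2.13)

psi_and, psi_or = psi_approx(x=100.0, n=10)
print(0 < psi_and < psi_or)  # True: ruin of both lines is rarer than ruin of either
```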
Theorem 2.2. Suppose that assumptions H1–H3 are satisfied for model (1.1). Then

$$\psi_{\mathrm{and}}(\vec x) \sim \sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big). \tag{2.14}$$
Moreover, if assumption H4 also holds for model (1.1), then

$$\psi_{\mathrm{or}}(\vec x) \sim \sum_{k=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x \big) + \sum_{k=1}^\infty P\big( X_{2,k} > a_2(1+r)^k x \big) - \sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big). \tag{2.15}$$
3. Proof of main results

Proof of Theorem 2.1. For any fixed integer $n \ge 1$,

$$\psi_{\mathrm{and}}(\vec x, n) = P\Big( \min_{1\le i\le n} U_{1,i} < 0,\ \min_{1\le i\le n} U_{2,i} < 0 \,\Big|\, \vec U_0 = \vec x \Big) = P\Big( \max_{1\le i\le n} \sum_{k=1}^i X_{1,k}(1+r)^{-k} > a_1 x,\ \max_{1\le i\le n} \sum_{k=1}^i X_{2,k}(1+r)^{-k} > a_2 x \Big),$$

and we have

$$\sum_{k=1}^n X_{l,k}(1+r)^{-k} \le \max_{0\le m\le n} \sum_{k=1}^m X_{l,k}(1+r)^{-k} \le \sum_{k=1}^n X_{l,k}^+(1+r)^{-k}, \quad l = 1, 2.$$
Noting that $\{X_{l,k}^+(1+r)^{-k}\}$ satisfies the conditions of Lemma 2.2, and that $P\big( X_{1,k}^+(1+r)^{-k} > a_1 x,\ X_{2,j}^+(1+r)^{-j} > a_2 x \big) = P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)$, we have

$$\psi_{\mathrm{and}}(\vec x, n) \le P\Big( \sum_{k=1}^n X_{1,k}^+(1+r)^{-k} > a_1 x,\ \sum_{k=1}^n X_{2,k}^+(1+r)^{-k} > a_2 x \Big) \sim \sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big). \tag{3.1}$$
Thus, it suffices to show that for $n \ge 1$,

$$\psi_{\mathrm{and}}(\vec x, n) \ge P\Big( \sum_{k=1}^n X_{1,k}(1+r)^{-k} > a_1 x,\ \sum_{k=1}^n X_{2,k}(1+r)^{-k} > a_2 x \Big) \gtrsim \sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big). \tag{3.2}$$
Apparently, (3.2) holds for $n = 1$; now suppose $n \ge 2$. Let $\theta > 1$ be a constant, and set $z = (\theta-1)/(n-1)$; clearly $z > 0$. Then

$$P\Big( \sum_{k=1}^n X_{1,k}(1+r)^{-k} > a_1 x,\ \sum_{k=1}^n X_{2,k}(1+r)^{-k} > a_2 x \Big)$$
$$\ge P\Big( \sum_{k=1}^n X_{1,k}(1+r)^{-k} > a_1 x,\ \sum_{j=1}^n X_{2,j}(1+r)^{-j} > a_2 x,\ \max_{1\le k\le n} X_{1,k}(1+r)^{-k} > \theta a_1 x,\ \max_{1\le j\le n} X_{2,j}(1+r)^{-j} > \theta a_2 x \Big)$$
$$= P\Big( \bigcup_{k=1}^n \bigcup_{j=1}^n \Big\{ \sum_{i=1}^n X_{1,i}(1+r)^{-i} > a_1 x,\ \sum_{i=1}^n X_{2,i}(1+r)^{-i} > a_2 x,\ X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x \Big\} \Big)$$
$$\ge \Delta_{1n} - \Delta_{2n} - \Delta_{3n} - \Delta_{4n},$$

where we specify $\Delta_{in}$, $i = 1, 2, 3, 4$, as follows:

$$\Delta_{1n} = \sum_{k=1}^n \sum_{j=1}^n P\Big( \sum_{i=1}^n X_{l,i}(1+r)^{-i} > a_l x,\ l = 1, 2,\ X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x \Big),$$
$$\Delta_{2n} = \sum_{k_1\ne k_2} \sum_{j=1}^n P\Big( \sum_{i=1}^n X_{l,i}(1+r)^{-i} > a_l x,\ l = 1, 2,\ X_{1,k_1} > \theta a_1(1+r)^{k_1} x,\ X_{1,k_2} > \theta a_1(1+r)^{k_2} x,\ X_{2,j} > \theta a_2(1+r)^j x \Big),$$
$$\Delta_{3n} = \sum_{k=1}^n \sum_{j_1\ne j_2} P\Big( \sum_{i=1}^n X_{l,i}(1+r)^{-i} > a_l x,\ l = 1, 2,\ X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j_1} > \theta a_2(1+r)^{j_1} x,\ X_{2,j_2} > \theta a_2(1+r)^{j_2} x \Big),$$
$$\Delta_{4n} = \sum_{k_1\ne k_2} \sum_{j_1\ne j_2} P\Big( \sum_{i=1}^n X_{l,i}(1+r)^{-i} > a_l x,\ l = 1, 2,\ X_{1,k_1} > \theta a_1(1+r)^{k_1} x,\ X_{1,k_2} > \theta a_1(1+r)^{k_2} x,\ X_{2,j_1} > \theta a_2(1+r)^{j_1} x,\ X_{2,j_2} > \theta a_2(1+r)^{j_2} x \Big).$$
As for $\Delta_{1n}$, with $z = (\theta-1)/(n-1)$ (so that $\theta a_l x - (n-1)z a_l x = a_l x$),

$$\sum_{k=1}^n \sum_{j=1}^n P\Big( \sum_{i=1}^n X_{l,i}(1+r)^{-i} > a_l x,\ l = 1, 2,\ X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x \Big)$$
$$\ge \sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > \theta a_1(1+r)^k x,\ X_{1,s_1} > -z a_1(1+r)^{s_1} x \text{ for } 1 \le s_1 \ne k \le n,\ X_{2,j} > \theta a_2(1+r)^j x,\ X_{2,s_2} > -z a_2(1+r)^{s_2} x \text{ for } 1 \le s_2 \ne j \le n \big)$$
$$= \sum_{k\ne j} P\big( X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x,\ X_{1,j} > -z a_1(1+r)^j x,\ X_{2,k} > -z a_2(1+r)^k x,\ X_{1,s_1} > -z a_1(1+r)^{s_1} x,\ X_{2,s_2} > -z a_2(1+r)^{s_2} x \text{ for } s_1, s_2 \notin \{k, j\} \big)$$
$$\quad + \sum_{k=1}^n P\big( X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,k} > \theta a_2(1+r)^k x,\ X_{1,s_1} > -z a_1(1+r)^{s_1} x,\ X_{2,s_2} > -z a_2(1+r)^{s_2} x \text{ for } s_1, s_2 \ne k \big). \tag{3.3}$$
By assumption H1, the first sum on the right hand side of (3.3) equals

$$\sum_{k\ne j} P\big( X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x,\ X_{1,j} > -z a_1(1+r)^j x,\ X_{2,k} > -z a_2(1+r)^k x \big) \prod_{s\ne k, j} P\big( X_{1,s} > -z a_1(1+r)^s x,\ X_{2,s} > -z a_2(1+r)^s x \big),$$

and the second sum on the right hand side of (3.3) equals

$$\sum_{k=1}^n P\big( X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,k} > \theta a_2(1+r)^k x \big) \prod_{s\ne k} P\big( X_{1,s} > -z a_1(1+r)^s x,\ X_{2,s} > -z a_2(1+r)^s x \big).$$

It is easy to check that

$$\lim_{x\to\infty} P\big( X_{1,s} > -z a_1(1+r)^s x,\ X_{2,s} > -z a_2(1+r)^s x \big) = 1,$$

and, for $k \ne j$,

$$P\big( X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x,\ X_{1,j} > -z a_1(1+r)^j x,\ X_{2,k} > -z a_2(1+r)^k x \big)$$
$$= P\big( X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,k} > -z a_2(1+r)^k x \big)\, P\big( X_{1,j} > -z a_1(1+r)^j x,\ X_{2,j} > \theta a_2(1+r)^j x \big)$$
$$= \big[ P(X_{1,k} > \theta a_1(1+r)^k x) - P(X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,k} \le -z a_2(1+r)^k x) \big] \times \big[ P(X_{2,j} > \theta a_2(1+r)^j x) - P(X_{2,j} > \theta a_2(1+r)^j x,\ X_{1,j} \le -z a_1(1+r)^j x) \big].$$

Moreover, it follows from $\lim_{x\to\infty} F_i(-x)/\bar F_i(x) = 0$, $i = 1, 2$, that

$$\lim_{x\to\infty} \frac{P(X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,k} \le -z a_2(1+r)^k x)}{P(X_{1,k} > a_1(1+r)^k x)} \le \lim_{x\to\infty} \frac{P(X_{2,k} \le -z a_2(1+r)^k x)}{P(X_{1,k} > a_1(1+r)^k x)} = 0, \tag{3.4}$$
and

$$\lim_{x\to\infty} \frac{P(X_{2,j} > \theta a_2(1+r)^j x,\ X_{1,j} \le -z a_1(1+r)^j x)}{P(X_{2,j} > a_2(1+r)^j x)} \le \lim_{x\to\infty} \frac{P(X_{1,j} \le -z a_1(1+r)^j x)}{P(X_{2,j} > a_2(1+r)^j x)} = 0. \tag{3.5}$$
Furthermore, by assumption H2 and Lemma 2.1, for $\theta > 1$ we can obtain

$$\liminf_{x\to\infty} \frac{P(X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x)}{P(X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x)} \ge \begin{cases} \theta^{-2\beta}, & k \ne j, \\ \theta^{-\beta\gamma}, & k = j, \end{cases} \tag{3.6}$$

and

$$\limsup_{x\to\infty} \frac{P(X_{1,k} > \theta a_1(1+r)^k x,\ X_{2,j} > \theta a_2(1+r)^j x)}{P(X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x)} \le \begin{cases} \theta^{-2\alpha}, & k \ne j, \\ \theta^{-\alpha\gamma}, & k = j. \end{cases} \tag{3.7}$$
Hence, by (3.4)–(3.6), we can get

$$\lim_{\theta\to 1} \liminf_{x\to\infty} \frac{\Delta_{1n}}{\sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)} \ge 1. \tag{3.8}$$
As for $\Delta_{2n}$, note that

$$\Delta_{2n} \le \sum_{k_1\ne k_2} \sum_{j=1}^n P\big( X_{1,k_1} > \theta a_1(1+r)^{k_1} x,\ X_{1,k_2} > \theta a_1(1+r)^{k_2} x,\ X_{2,j} > \theta a_2(1+r)^j x \big)$$
$$= \sum_{k_1\ne k_2,\, j\ne k_1} P\big( X_{1,k_1} > \theta a_1(1+r)^{k_1} x \big)\, P\big( X_{1,k_2} > \theta a_1(1+r)^{k_2} x,\ X_{2,j} > \theta a_2(1+r)^j x \big)$$
$$\quad + \sum_{k_1\ne k_2,\, j = k_1} P\big( X_{1,k_2} > \theta a_1(1+r)^{k_2} x \big)\, P\big( X_{1,k_1} > \theta a_1(1+r)^{k_1} x,\ X_{2,j} > \theta a_2(1+r)^j x \big). \tag{3.9}$$

Then by (3.7) and (3.9), we get

$$\lim_{x\to\infty} \frac{\Delta_{2n}}{\sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)} = 0. \tag{3.10}$$
Similarly, we can get

$$\lim_{x\to\infty} \frac{\Delta_{3n}}{\sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)} = 0. \tag{3.11}$$
Now we turn to $\Delta_{4n}$, noting that

$$P\Big( \sum_{i=1}^n X_{l,i}(1+r)^{-i} > a_l x,\ l = 1, 2,\ X_{1,k_1} > \theta a_1(1+r)^{k_1} x,\ X_{1,k_2} > \theta a_1(1+r)^{k_2} x,\ X_{2,j_1} > \theta a_2(1+r)^{j_1} x,\ X_{2,j_2} > \theta a_2(1+r)^{j_2} x \Big)$$
$$\le P\big( X_{1,k_1} > \theta a_1(1+r)^{k_1} x,\ X_{1,k_2} > \theta a_1(1+r)^{k_2} x,\ X_{2,j_1} > \theta a_2(1+r)^{j_1} x \big).$$

Comparing this computation with that for $\Delta_{2n}$, still by (3.7), one can immediately obtain

$$\lim_{x\to\infty} \frac{\Delta_{4n}}{\sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)} = 0. \tag{3.12}$$

Combining (3.8), (3.10), (3.11) and (3.12) yields (3.2), which ends the proof of (2.12).
Clearly,

$$\psi_{\mathrm{or}}(\vec x, n) = P\Big( \Big\{ \max_{0\le i\le n} \sum_{k=1}^i X_{1,k}(1+r)^{-k} > a_1 x \Big\} \cup \Big\{ \max_{0\le i\le n} \sum_{k=1}^i X_{2,k}(1+r)^{-k} > a_2 x \Big\} \,\Big|\, \vec U_0 = \vec x \Big)$$
$$= P\Big( \max_{0\le i\le n} \sum_{k=1}^i X_{1,k}(1+r)^{-k} > a_1 x \Big) + P\Big( \max_{0\le i\le n} \sum_{j=1}^i X_{2,j}(1+r)^{-j} > a_2 x \Big) - \psi_{\mathrm{and}}(\vec x, n).$$
By Theorem 3.1(a) of Zhang et al. (2009), one can see that

$$P\Big( \max_{0\le i\le n} \sum_{k=1}^i X_{j,k}(1+r)^{-k} > a_j x \Big) \sim \sum_{k=1}^n P\big( X_{j,k} > a_j(1+r)^k x \big), \quad j = 1, 2. \tag{3.13}$$
On the other hand,

$$\lim_{x\to\infty} \frac{\sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)}{\sum_{k=1}^n P\big( X_{1,k} > a_1(1+r)^k x \big) + \sum_{k=1}^n P\big( X_{2,k} > a_2(1+r)^k x \big)}$$
$$= \lim_{x\to\infty} \frac{\sum_{k=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,k} > a_2(1+r)^k x \big) + \sum_{k\ne j} P\big( X_{1,k} > a_1(1+r)^k x \big)\, P\big( X_{2,j} > a_2(1+r)^j x \big)}{\sum_{k=1}^n P\big( X_{1,k} > a_1(1+r)^k x \big) + \sum_{k=1}^n P\big( X_{2,k} > a_2(1+r)^k x \big)}$$
$$= \lim_{x\to\infty} \frac{\sum_{k=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,k} > a_2(1+r)^k x \big)}{\sum_{k=1}^n \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]} = \lambda, \tag{3.14}$$
where in the last step we have used assumption H4. Since $0 \le \lambda < 1$, combining (2.12), (3.13) and (3.14), one can immediately get relation (2.13).

Proof of Theorem 2.2. For $n \ge 1$, by the results of Theorem 2.1, one can get
$$\psi_{\mathrm{and}}(\vec x) \ge P\Big( \max_{1\le i\le n} \sum_{k=1}^i X_{1,k}(1+r)^{-k} > a_1 x,\ \max_{1\le i\le n} \sum_{k=1}^i X_{2,k}(1+r)^{-k} > a_2 x \Big)$$
$$\sim \sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big) - \sum_{k=1}^\infty \sum_{j=n+1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)$$
$$\quad - \sum_{k=n+1}^\infty \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big) =: I(x) - I_1(x) - I_2(x).$$

By assumption H1, it is easy to see that

$$I_2(x) = \sum_{k=n+1}^\infty \sum_{j=1}^n P\big( X_{1,k} > a_1(1+r)^k x \big)\, P\big( X_{2,j} > a_2(1+r)^j x \big) \lesssim \bar F_1(a_1 x)\bar F_2(a_2 x)\, \frac{(1+r)^{-\alpha(n+2)} - (1+r)^{-\alpha(2n+2)}}{(1-(1+r)^{-\alpha})^2}.$$

Since $r > 0$, we immediately obtain from the above expression that

$$\lim_{n\to\infty} \frac{I_2(x)}{\bar F_1(a_1 x)\bar F_2(a_2 x)} = 0.$$
As for $I_1(x)$, when $x$ tends to infinity, one can get

$$I_1(x) \lesssim \bar F_1(a_1x)\bar F_2(a_2x)\, \frac{(1+r)^{-\alpha(n+2)} - (1+r)^{-\alpha(2n+2)}}{(1-(1+r)^{-\alpha})^2} + \bar F_{1,2}(a_1x, a_2x)\, \frac{(1+r)^{-\alpha\gamma(n+1)}}{1-(1+r)^{-\alpha\gamma}}$$
$$\quad + \bar F_1(a_1x)\bar F_2(a_2x) \Bigg[ \bigg( \frac{(1+r)^{-\alpha(n+1)}}{1-(1+r)^{-\alpha}} \bigg)^2 - \frac{(1+r)^{-2\alpha(n+1)}}{1-(1+r)^{-2\alpha}} \Bigg] =: I_{11}(x) + I_{12}(x) + I_{13}(x), \tag{3.15}$$
hence, by letting $n \to \infty$,

$$\lim_{n\to\infty} \frac{I_{1j}(x)}{\bar F_1(a_1x)\bar F_2(a_2x)} = 0 \quad \text{for } j = 1, 3, \qquad \lim_{n\to\infty} \frac{I_{12}(x)}{\bar F_{1,2}(a_1x, a_2x)} = 0,$$

and consequently,

$$\lim_{n\to\infty} \frac{I_1(x)}{\bar F_1(a_1x)\bar F_2(a_2x) + \bar F_{1,2}(a_1x, a_2x)} = 0. \tag{3.16}$$
Finally, combining (3.15) and (3.16), we obtain

$$\psi_{\mathrm{and}}(\vec x) \gtrsim \sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big). \tag{3.17}$$

Next, we turn to verifying

$$\psi_{\mathrm{and}}(\vec x) \lesssim \sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big) \tag{3.18}$$
and conclude the proof by combining this with (3.17). Note that for all $n = 1, 2, \ldots$ and $i = 1, 2$,

$$\max_{1\le m<\infty} \sum_{k=1}^m X_{i,k}(1+r)^{-k} \le \max_{1\le m\le n} \sum_{k=1}^m X_{i,k}(1+r)^{-k} + \sum_{k=n+1}^\infty X_{i,k}^+(1+r)^{-k} =: A_{i,n} + B_{i,n}.$$

Hence, for any constant $u$ with $0 < u < 1$,

$$\psi_{\mathrm{and}}(\vec x) \le P\big( A_{1,n} > a_1 ux,\ A_{2,n} > a_2 ux \big) + P\big( B_{1,n} > a_1(1-u)x,\ B_{2,n} > a_2(1-u)x \big)$$
$$\quad + P\big( A_{1,n} > a_1 ux,\ B_{2,n} > a_2(1-u)x \big) + P\big( B_{1,n} > a_1(1-u)x,\ A_{2,n} > a_2 ux \big) =: J_1(x) + J_2(x) + J_3(x) + J_4(x).$$

It follows from Theorem 2.1 that

$$J_1(x) \sim \sum_{k=1}^n \sum_{j=1}^n P\big( X_{1,k} > a_1 u(1+r)^k x,\ X_{2,j} > a_2 u(1+r)^j x \big). \tag{3.19}$$
Let $0 < \delta < 1$ be a constant and $N_0 = [\log_{(1+r)^{-\delta}}(1-(1+r)^{-\delta})] + 1$; then for all integers $n > N_0$, we have $\sum_{k=n+1}^\infty (1+r)^{-\delta k} < 1$, and we derive that

$$J_2(x) \le P\Big( \sum_{k=n+1}^\infty X_{i,k}^+(1+r)^{-k} > \sum_{k=n+1}^\infty (1+r)^{-\delta k}\, a_i(1-u)x \ \text{for } i = 1, 2 \Big)$$
$$\le \sum_{k=n+1}^\infty \sum_{j=n+1}^\infty P\big( X_{1,k}^+ > a_1(1-u)(1+r)^{(1-\delta)k} x,\ X_{2,j}^+ > a_2(1-u)(1+r)^{(1-\delta)j} x \big)$$
$$\lesssim (1-u)^{-2\beta}\, \bar F_1(a_1x)\bar F_2(a_2x) \Bigg[ \bigg( \frac{(1+r)^{-\alpha(1-\delta)(n+1)}}{1-(1+r)^{-\alpha(1-\delta)}} \bigg)^2 - \frac{(1+r)^{-2\alpha(1-\delta)(n+1)}}{1-(1+r)^{-2\alpha(1-\delta)}} \Bigg] + (1-u)^{-\beta\gamma}\, \bar F_{1,2}(a_1x, a_2x)\, \frac{(1+r)^{-\alpha\gamma(1-\delta)(n+1)}}{1-(1+r)^{-\alpha\gamma(1-\delta)}}. \tag{3.20}$$
Now we turn to estimating $J_3(x)$ and $J_4(x)$. For all integers $n > N_0$ and $0 < \delta < 1$,

$$J_3(x) = P(A_{1,n} > a_1 ux)\, P(B_{2,n} > a_2(1-u)x) \le P(A_{1,n} > a_1 ux) \sum_{j=n+1}^\infty P\big( X_{2,j} > a_2(1-u)(1+r)^{(1-\delta)j} x \big)$$
$$\lesssim (u(1-u))^{-\beta}\, \bar F_1(a_1x)\bar F_2(a_2x)\, \frac{1-(1+r)^{-\alpha n}}{(1+r)^\alpha - 1} \cdot \frac{(1+r)^{-\alpha(1-\delta)(n+1)}}{1-(1+r)^{-\alpha(1-\delta)}}, \tag{3.21}$$
where in the last step we have used the fact (3.13). Similarly,

$$J_4(x) \lesssim (u(1-u))^{-\beta}\, \bar F_1(a_1x)\bar F_2(a_2x)\, \frac{1-(1+r)^{-\alpha n}}{(1+r)^\alpha - 1} \cdot \frac{(1+r)^{-\alpha(1-\delta)(n+1)}}{1-(1+r)^{-\alpha(1-\delta)}}.$$

Together with (3.19)–(3.21), letting $n \to \infty$ leads to

$$\psi_{\mathrm{and}}(\vec x) \lesssim \sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1 u(1+r)^k x,\ X_{2,j} > a_2 u(1+r)^j x \big), \tag{3.22}$$
and further letting $u \to 1$ in (3.22) leads to (3.18), which completes the proof of (2.14).

Note that

$$\frac{\sum_{k=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,k} > a_2(1+r)^k x \big)}{\sum_{k=1}^n \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big] + \sum_{k=n+1}^\infty \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]}$$
$$\le \frac{\sum_{k=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,k} > a_2(1+r)^k x \big)}{\sum_{k=1}^\infty \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]}$$
$$\le \frac{\sum_{k=1}^n P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,k} > a_2(1+r)^k x \big) + \sum_{k=n+1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,k} > a_2(1+r)^k x \big)}{\sum_{k=1}^n \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]}.$$
After a simple calculation, we can get

$$\lim_{n\to\infty} \lim_{x\to\infty} \frac{\sum_{k=n+1}^\infty \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]}{\sum_{k=1}^n \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]} = 0,$$

and

$$\lim_{x\to\infty} \frac{\sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x \big)\, P\big( X_{2,j} > a_2(1+r)^j x \big)}{\sum_{k=1}^\infty \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]} = 0,$$
and, using assumption H4 and Lemma 2.1, one can obtain that

$$\lim_{n\to\infty} \lim_{x\to\infty} \frac{\sum_{k=n+1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,k} > a_2(1+r)^k x \big)}{\sum_{k=1}^n \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]} = 0.$$
Hence, by assumption H4, it is easy to see that

$$\lim_{x\to\infty} \frac{\sum_{k=1}^\infty \sum_{j=1}^\infty P\big( X_{1,k} > a_1(1+r)^k x,\ X_{2,j} > a_2(1+r)^j x \big)}{\sum_{k=1}^\infty \big[ P(X_{1,k} > a_1(1+r)^k x) + P(X_{2,k} > a_2(1+r)^k x) \big]} = \lambda. \tag{3.23}$$

Similarly to the derivation of (2.13), combining (3.23), Theorem 3.1(b) of Zhang et al. (2009) and (2.14), we can conclude (2.15), which completes the proof.

Acknowledgments

The authors are grateful to the anonymous referees and the editor for their constructive comments and suggestions, which have improved the presentation of the paper.
References

Albrecher, H., Asmussen, S., Kortschak, D., 2006. Tail asymptotics for the sum of two heavy-tailed dependent risks. Extremes 9, 107–130.
Cai, J., Li, H., 2005. Multivariate risk model of phase type. Insurance: Mathematics & Economics 36 (2), 137–152.
Cai, J., Li, H., 2007. Dependence properties and bounds for ruin probabilities in multivariate compound risk models. Journal of Multivariate Analysis 98, 757–773.
Chan, W.-S., Yang, H., Zhang, L., 2003. Some results on ruin probabilities in a two-dimensional risk model. Insurance: Mathematics & Economics 32 (3), 345–358.
Embrechts, P., Klüppelberg, C., Mikosch, T., 1997. Modelling Extremal Events for Insurance and Finance. Springer, Berlin.
Li, J., Liu, Z., Tang, Q., 2007. On the ruin probabilities of a bidimensional perturbed risk model. Insurance: Mathematics & Economics 41, 185–195.
Resnick, S.I., 2007. Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer-Verlag, New York.
Zhang, Y., Shen, X.M., Weng, C.G., 2009. Approximation of the tail probability of randomly weighted sums and applications. Stochastic Processes and their Applications 119, 655–675.