Interval estimation for a measure of tail dependence

Insurance: Mathematics and Economics 64 (2015) 294–305

Aiai Liu a, Yanxi Hou b, Liang Peng c,∗

a School of Mathematics, Tongji University, China
b School of Mathematics, Georgia Institute of Technology, USA
c Department of Risk Management and Insurance, Georgia State University, USA

Article history: Received April 2015; Received in revised form May 2015; Accepted 30 May 2015; Available online 29 June 2015.

Keywords: Conditional Kendall's tau; Interval estimation; Jackknife empirical likelihood; Tail dependence; Extreme events

Abstract

Systemic risk concerns extreme co-movement of several financial variables, which involves characterizing tail dependence. The coefficient of tail dependence was proposed by Ledford and Tawn (1996, 1997) to distinguish asymptotic independence from asymptotic dependence. Recently a new measure based on the conditional Kendall's tau was proposed by Asimit et al. (2015) to measure tail dependence and to distinguish asymptotic independence from asymptotic dependence. To construct a confidence interval for this new measure effectively, this paper proposes a smooth jackknife empirical likelihood method, which does not need to estimate any additional quantities such as the asymptotic variance. A simulation study shows that the proposed method has good finite sample performance.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

A recent research interest in risk management focuses on systemic risk in the banking industry and in insurance companies. Systemic risk concerns extreme co-movements of key financial variables, so effectively measuring tail dependence plays an important role in understanding and managing systemic risk. See Allen et al. (2012) for measuring systemic risk and using the measure to predict future economic downturns, and Chen et al. (2013) for a connection of systemic risk between banks and insurers; an excellent review of systemic risk is given by Bisias et al. (2012). Extreme co-movement usually requires measuring the tail dependence of several variables. Tail dependence has been studied in the context of multivariate extreme value theory for decades. Since such a measure focuses on a far tail region of the underlying distribution, statistical inference is quite challenging due to the lack of observations. Therefore, it is always desirable to find a better measure, or some competitive measures, together with an efficient inference procedure.

Suppose (X, Y) is a random vector with joint distribution F and continuous marginal distributions F₁ and F₂. Define U = 1 − F₁(X) and V = 1 − F₂(Y); then the distribution of (U, V) is the survival copula

C(u, v) = P(1 − F₁(X) ≤ u, 1 − F₂(Y) ≤ v).   (1.1)

In order to predict an extreme co-movement of a financial market, it is useful to investigate the behavior of the so-called tail copula, defined as lim_{t→0} t⁻¹ C(tu, tv), which can be employed to extrapolate data into a far tail region; see Haug et al. (2011) for an overview. When the limit is not identically zero (i.e., asymptotic dependence), one can predict rare events via estimating this limiting function. On the other hand, if the limit is identically zero (i.e., asymptotic independence), then some additional conditions are needed for predicting extreme events. To distinguish these two cases effectively, Ledford and Tawn (1996, 1997) introduced the so-called coefficient of tail dependence η ∈ (0, 1] by assuming that C(t, t) = t^{1/η} s(t), where s(t) is a slowly varying function, i.e., lim_{t→0} s(tx)/s(t) = 1 for all x > 0. Therefore, η and the limit of s(t) can be used to distinguish asymptotic dependence (i.e., η = 1 and lim_{t→0} s(t) > 0) from asymptotic independence (i.e., η < 1, or η = 1 and lim_{t→0} s(t) = 0). Statistical inference for η is available in Dutang et al. (2014), Draisma et al. (2004), Goegebeur and Guillou (2012) and Peng (1999).

Although a copula gives a complete description of dependence among variables, having some summary measures of dependence is useful in practice; commonly used ones include the correlation coefficient, Spearman's rho and Kendall's tau. Similarly, the tail copula determines the tail dependence completely, but the coefficient of tail dependence η gives a useful summary measure of tail dependence. Since

∗ Corresponding author. E-mail address: [email protected] (L. Peng).
http://dx.doi.org/10.1016/j.insmatheco.2015.05.014


Kendall's tau is invariant to marginals and has been popular in risk management, one may wonder whether Kendall's tau can be modified to give a simple and effective measure of tail dependence as well. Recently, for the case where the survival copula C(u, v) is bivariate regularly varying, i.e., H(u, v) = lim_{t→0} C(tu, tv)/C(t, t) exists and is finite for u, v ≥ 0, Asimit et al. (to appear) investigated the limit of the conditional Kendall's tau, i.e.,

θ = lim_{u→0} E{sgn((U₁ − U₂)(V₁ − V₂)) | max(U₁, U₂, V₁, V₂) ≤ u},

found that θ = 4∫₀¹∫₀¹ H(x, y) dH(x, y) − 1, and showed that θ is positive for a subclass of asymptotic dependence, such as elliptical tail copulas, and nonpositive for a subclass of asymptotic independence, such as normal copulas. Due to their ease of implementation, elliptical copulas and elliptical tail copulas have been employed in risk management; see McNeil et al. (2005). The study of tails of mixtures of elliptical copulas is available in Manner and Segers (2011). A new method for constructing copulas with tail dependence is given by Li et al. (2014). Since the above measure θ involves the function H rather than some particular values of H, as η does, one may expect that θ could be statistically more effective than η in distinguishing asymptotic behavior and measuring tail dependence.

For interval estimation of θ, one can estimate the complicated asymptotic variance of the nonparametric estimator proposed in Asimit et al. (to appear). In order to avoid estimating the asymptotic variance, a naive bootstrap method can be employed to construct a confidence interval, but this generally performs badly in finite samples. Alternatively, empirical likelihood methods have proved to be quite effective in interval estimation and hypothesis testing, and they require no estimation of any additional quantities. We refer to Owen (2001) for an overview of empirical likelihood methods.
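As context for the comparison between θ and η, note that under the Ledford–Tawn model C(t, t) = t^{1/η} s(t) the variable T = min(1/U, 1/V) has tail index 1/η, so a textbook Hill estimator applied to T estimates η. The sketch below illustrates only this generic idea with rank-based margins; it is not one of the specific estimators studied in the papers cited above.

```python
import numpy as np

def eta_hill(x, y, k):
    """Hill-type estimate of the Ledford-Tawn coefficient eta.

    Transform to (approximately) uniform margins by ranks, set
    T_i = min(1/U_i, 1/V_i); under C(t, t) = t^(1/eta) s(t) the
    variable T has tail index 1/eta, so the Hill estimator based on
    the k largest T_i estimates eta directly.
    """
    n = len(x)
    u = 1.0 - (np.argsort(np.argsort(x)) + 1) / (n + 1.0)  # 1 - F1(X) via ranks
    v = 1.0 - (np.argsort(np.argsort(y)) + 1) / (n + 1.0)
    t = np.minimum(1.0 / u, 1.0 / v)
    t_sorted = np.sort(t)[::-1]                 # decreasing order statistics
    return np.mean(np.log(t_sorted[:k])) - np.log(t_sorted[k])

x = np.arange(1.0, 1001.0)
eta = eta_hill(x, x, 100)   # comonotone data: estimate is close to 1
```

For comonotone data the estimate is close to the boundary value η = 1, while for independent components it should be near 1/2.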
In this paper we investigate the possibility of employing an empirical likelihood method to construct a confidence interval for the limit of the conditional Kendall’s tau. We organize this paper as follows. Section 2 presents the new methodology and theoretical results. A simulation study and real data analysis on Danish fire losses are given in Section 3. All proofs are put in Section 4.


2. Methodology and theoretical results

Throughout we assume the observations (X₁, Y₁), ..., (Xₙ, Yₙ) are independent and identically distributed with distribution function F and continuous marginals F₁ and F₂. For the study of the asymptotic tail behavior of F, Asimit et al. (to appear) considered the limit of the conditional Kendall's tau, i.e.,

θ = lim_{u→0} E{sgn((U₁ − U₂)(V₁ − V₂)) | max(U₁, U₂, V₁, V₂) ≤ u}.

A simple nonparametric estimator for θ replaces the conditional expectation by its sample conditional mean, which leads to

θ̂(k) = [Σ_{1≤i<j≤n} sgn((Û_i − Û_j)(V̂_i − V̂_j)) I(max(Û_i, Û_j, V̂_i, V̂_j) ≤ k/n)] / [Σ_{1≤i<j≤n} I(max(Û_i, Û_j, V̂_i, V̂_j) ≤ k/n)],

where Û_i = 1 − F̂₁(X_i), V̂_i = 1 − F̂₂(Y_i), F̂₁(x) = n⁻¹ Σ_{i=1}^n I(X_i ≤ x), F̂₂(y) = n⁻¹ Σ_{i=1}^n I(Y_i ≤ y), and k = k(n) → ∞, k/n → 0 as n → ∞. Under some conditions, Asimit et al. (to appear) derived the asymptotic limit of θ̂(k), which has a complicated asymptotic variance. Here we investigate the possibility of employing an empirical likelihood method to construct a confidence interval without estimating the asymptotic variance explicitly.

By noting that θ̂(k) is a solution of the equation

Σ_{1≤i<j≤n} {sgn((Û_i − Û_j)(V̂_i − V̂_j)) − θ} I(max(Û_i, Û_j, V̂_i, V̂_j) ≤ k/n) = 0,

one may employ the empirical likelihood method based on estimating equations in Qin and Lawless (1994) to the above equation. Unfortunately such a direct application fails to achieve a chi-squared limit, due to the involved U-statistic and the plug-in estimators for the U_i's and V_i's. Recently a so-called jackknife empirical likelihood method was proposed by Jing et al. (2009) to construct confidence intervals for non-linear functionals, including U-statistics. However, due to the involved indicator function, a direct application of the jackknife empirical likelihood again fails to satisfy the Wilks theorem. In order to capture the contribution made by the plug-in empirical distributions, we employ the smooth jackknife empirical likelihood method proposed by Peng and Qi (2010) for constructing confidence intervals for a tail copula. More specifically, for l₁, l₂ = 1, ..., n, define

F̂₁^{(l₁)}(x) = (1/(n − 1)) Σ_{j=1, j≠l₁}^n I(X_j ≤ x),   Û_{l₂}^{(l₁)} = 1 − F̂₁^{(l₁)}(X_{l₂}),
F̂₂^{(l₁)}(x) = (1/(n − 1)) Σ_{j=1, j≠l₁}^n I(Y_j ≤ x),   V̂_{l₂}^{(l₁)} = 1 − F̂₂^{(l₁)}(Y_{l₂}),

T̂_n(θ) = (2/(n(n − 1))) Σ_{1≤i<j≤n} {sgn((Û_i − Û_j)(V̂_i − V̂_j)) − θ} G((1 − (n/k)Û_i)/h) G((1 − (n/k)Û_j)/h) G((1 − (n/k)V̂_i)/h) G((1 − (n/k)V̂_j)/h),

T̂_n^{(l₁)}(θ) = (2/((n − 1)(n − 2))) Σ_{1≤i<j≤n, i≠l₁, j≠l₁} {sgn((Û_i^{(l₁)} − Û_j^{(l₁)})(V̂_i^{(l₁)} − V̂_j^{(l₁)})) − θ} G((1 − (n/k)Û_i^{(l₁)})/h) G((1 − (n/k)Û_j^{(l₁)})/h) G((1 − (n/k)V̂_i^{(l₁)})/h) G((1 − (n/k)V̂_j^{(l₁)})/h),

where G(x) = ∫_{−∞}^x g(y) dy, g is a symmetric smooth density function with support [−1, 1], and h = h(n) > 0 is a bandwidth. Therefore a jackknife sample is defined as

Ẑ_i(θ) = n T̂_n(θ) − (n − 1) T̂_n^{(i)}(θ)   for i = 1, ..., n.

Note that, in order to take care of the contributions from the Û_i's and V̂_i's in proving the Wilks theorem, we do not use G((1 − (n/k) max{Û_i, V̂_i, Û_j, V̂_j})/h) instead of the product of G's in the above definition of T̂_n(θ). Based on this jackknife sample, a smooth jackknife empirical likelihood function for θ is obtained as

L(θ) = max{ Π_{i=1}^n (n p_i) : p₁ ≥ 0, ..., p_n ≥ 0, Σ_{i=1}^n p_i = 1, Σ_{i=1}^n p_i Ẑ_i(θ) = 0 }.   (2.1)

It follows from the Lagrange multiplier technique that

l(θ) := −2 log L(θ) = 2 Σ_{i=1}^n log{1 + λ Ẑ_i(θ)},   (2.2)

where λ = λ(θ) satisfies

Σ_{i=1}^n Ẑ_i(θ) / {1 + λ Ẑ_i(θ)} = 0.
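The raw estimator θ̂(k) can be computed directly from data; the following is a minimal O(n²) sketch with rank-based plug-in margins (our own straightforward implementation of the displayed formula, not the authors' code):

```python
import numpy as np

def theta_hat(x, y, k):
    """Sample analogue of the conditional Kendall's tau, theta_hat(k):
    average of concordance signs over pairs (i, j) whose rank-based tail
    coordinates satisfy max(U_i, U_j, V_i, V_j) <= k/n."""
    n = len(x)
    # U_i = 1 - F1_hat(X_i), V_i = 1 - F2_hat(Y_i) with empirical cdfs
    u = 1.0 - (np.argsort(np.argsort(x)) + 1) / n
    v = 1.0 - (np.argsort(np.argsort(y)) + 1) / n
    num, den = 0.0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            if max(u[i], u[j], v[i], v[j]) <= k / n:
                num += np.sign((u[i] - u[j]) * (v[i] - v[j]))
                den += 1
    return num / den if den > 0 else float("nan")
```

For comonotone data every retained pair is concordant, so theta_hat(x, x, k) returns 1; if no pair falls in the joint tail region the estimator is undefined and the sketch returns NaN.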

In order to show that the Wilks theorem holds for the above smooth jackknife empirical likelihood method, we need some regularity


conditions. As usual in extreme value theory, we need a second-order regular variation condition to control the bias in θ̂(k).

(A1) There exist a regularly varying function A(t) → 0 as t → 0 with index ρ̄ ≥ 0 and functions Q(u, v) and q(u, v) such that

lim_{t→0} { C(tu, tv)/C(t, t) − H(u, v) } / A(t) = Q(u, v)   and
lim_{t→0} { t² C₁₂(tu, tv)/C(t, t) − H₁₂(u, v) } / A(t) = q(u, v)

for all (u, v) ∈ [0, 1]² and uniformly on {(u, v) : u² + v² = 1}, where H₁₂(u, v) = ∂²H(u, v)/∂u∂v and C₁₂(u, v) = ∂²C(u, v)/∂u∂v;

(A2) k → ∞, (n/k) C(k/n, k/n) → c₀ ∈ [0, 1], n C(k/n, k/n) → ∞ and {n C(k/n, k/n)}^{1/2} A(k/n) → 0 as n → ∞;

(A3) h → 0, kh → ∞, n C(k/n, k/n) h² → ∞, n C(k/n, k/n) h⁴ → 0 and √{n C(k/n, k/n)} / (kh) → 0 as n → ∞.

Remark 2.1. Conditions (A1) and (A2) are employed to derive the asymptotic normality of (1/√{n C³(k/n, k/n)}) Σ_{i=1}^n Ẑ_i(θ₀) with mean zero, and they appear in Asimit et al. (to appear). Condition (A3) is similar to the conditions imposed on the bandwidth in Peng and Qi (2010) for the study of tail copulas, noting that the rate of convergence in this paper is √{n C³(k/n, k/n)} rather than √k as in Peng and Qi (2010).

Theorem 2.1. Under conditions (A1)–(A3), l(θ0 ) converges in distribution to a chi-squared limit with one degree of freedom as n → ∞, where θ0 is the true value of θ , i.e., the true limit of the conditional Kendall’s tau.
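In practice, once the jackknife pseudo-values Ẑ₁(θ), ..., Ẑₙ(θ) are computed, evaluating l(θ) in (2.2) only requires solving the one-dimensional Lagrange equation for λ; the interval then collects all θ with l(θ) ≤ χ²_{1,γ}, e.g. by scanning a grid of θ values as done in Section 3. Below is a sketch of this generic empirical likelihood step (the pseudo-values themselves are assumed given; the bisection scheme is our own choice):

```python
import numpy as np

def el_log_ratio(z, tol=1e-10, max_iter=100):
    """Return l = -2 log L = 2 * sum(log(1 + lam * z_i)), where lam solves
    the Lagrange equation sum(z_i / (1 + lam * z_i)) = 0.

    The score is strictly decreasing in lam, from +inf to -inf on the
    feasible interval where all 1 + lam * z_i > 0, so bisection applies.
    If 0 is not strictly inside the range of z, the ratio is infinite.
    """
    z = np.asarray(z, dtype=float)
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                       # 0 not in convex hull of z
    lo = (-1.0 + 1e-12) / z.max()           # keep 1 + lam*z_i > 0 for all i
    hi = (-1.0 + 1e-12) / z.min()
    lam = 0.0
    for _ in range(max_iter):
        lam = 0.5 * (lo + hi)
        score = np.sum(z / (1.0 + lam * z))
        if abs(score) < tol:
            break
        if score > 0:
            lo = lam
        else:
            hi = lam
    return 2.0 * np.sum(np.log1p(lam * z))
```

A 90% interval is {θ : el_log_ratio(Ẑ(θ)) ≤ 2.706}, where 2.706 is the 0.9 quantile of the chi-squared distribution with one degree of freedom.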

Table 1
Coverage probabilities for the bootstrap method based on the smoothing estimator θ̃ (i.e., solving T̂_n(θ) = 0) and the empirical likelihood method, for k = 50, 100, 150 and h = δ{Σ_{i=1}^n I(Û_i ≤ k/n, V̂_i ≤ k/n)}^{−1/3} with δ = 0.5, 1, 1.5. We take ρ = 0.5 for both the normal copula and the elliptical distribution.

(k, δ)       Distribution    α    I^B_0.90   I^B_0.95   I^EL_0.90   I^EL_0.95
(50, 0.5)    Normal copula        0.853      0.911      0.889       0.954
(50, 1.0)    Normal copula        0.869      0.920      0.907       0.970
(50, 1.5)    Normal copula        0.854      0.917      0.903       0.964
(50, 0.5)    Elliptical      1    0.887      0.921      0.894       0.931
(50, 1.0)    Elliptical      1    0.874      0.925      0.893       0.939
(50, 1.5)    Elliptical      1    0.885      0.924      0.898       0.939
(50, 0.5)    Elliptical      5    0.822      0.880      0.893       0.953
(50, 1.0)    Elliptical      5    0.829      0.892      0.905       0.959
(50, 1.5)    Elliptical      5    0.832      0.888      0.898       0.956
(100, 0.5)   Normal copula        0.823      0.881      0.815       0.889
(100, 1.0)   Normal copula        0.824      0.882      0.809       0.897
(100, 1.5)   Normal copula        0.816      0.884      0.801       0.883
(100, 0.5)   Elliptical      1    0.882      0.939      0.889       0.938
(100, 1.0)   Elliptical      1    0.879      0.936      0.889       0.944
(100, 1.5)   Elliptical      1    0.872      0.921      0.883       0.941
(100, 0.5)   Elliptical      5    0.874      0.923      0.906       0.942
(100, 1.0)   Elliptical      5    0.872      0.926      0.911       0.953
(100, 1.5)   Elliptical      5    0.873      0.925      0.911       0.952
(150, 0.5)   Normal copula        0.742      0.819      0.720       0.823
(150, 1.0)   Normal copula        0.721      0.798      0.701       0.800
(150, 1.5)   Normal copula        0.693      0.776      0.673       0.777
(150, 0.5)   Elliptical      1    0.888      0.933      0.895       0.944
(150, 1.0)   Elliptical      1    0.890      0.937      0.898       0.947
(150, 1.5)   Elliptical      1    0.882      0.936      0.888       0.944
(150, 0.5)   Elliptical      5    0.885      0.942      0.919       0.954
(150, 1.0)   Elliptical      5    0.887      0.937      0.919       0.962
(150, 1.5)   Elliptical      5    0.889      0.934      0.920       0.954

Based on the above limit, a confidence interval for θ₀ with level γ is I^{EL}_γ = {θ : l(θ) ≤ χ²_{1,γ}}, where χ²_{1,γ} is the γ-th quantile of a chi-squared distribution with one degree of freedom. In the simulation study below we provide a way to choose h, which is in general less important than choosing k. As usual in extreme value theory, it is always challenging to choose k; we plan to investigate data-driven methods for choosing k in the future.

3. Simulation study and data analysis

First we examine the finite sample behavior of the proposed jackknife empirical likelihood method in terms of coverage accuracy and compare it with the normal approximation method. We draw 1000 random samples of size n = 1000 from a normal copula with correlation ρ, and from the elliptical random vector RAU, where R > 0 is a random variable with distribution P(R > x) = x^{−α} for some α > 0, A is a deterministic 2 × 2 matrix with AAᵀ = [[1, ρ], [ρ, 1]], and U is uniformly distributed on {z = (z₁, z₂)ᵀ : zᵀz = 1} and independent of R. Hence the true θ for the normal copula is zero; a formula for computing the true θ for the above elliptical distribution is given in Asimit et al. (to appear).

Motivated by the choice of bandwidth in smooth distribution function estimation, we choose h = δ{Σ_{i=1}^n I(Û_i ≤ k/n, V̂_i ≤ k/n)}^{−1/3} with δ = 0.5, 1, 1.5. We employ the kernel g(x) = (15/16)(1 − x²)² I(|x| ≤ 1) and consider k = 50, 100, 150, 200. For computing the confidence interval based on the smooth estimator θ̃, which solves T̂_n(θ) = 0, we use the bootstrap method by drawing 1000 resamples. Denote this interval by I^B_γ.
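The elliptical vector RAU in the simulation design above can be drawn by inverse-transform sampling for R and a Cholesky factor for A. A minimal sketch (our own illustration of the stated design; the demo call fixes ρ = 0.5 and α = 5):

```python
import numpy as np

def sample_elliptical(n, rho, alpha, rng):
    """Draw (X, Y) = R * A * U with P(R > x) = x^(-alpha) for x >= 1,
    U uniform on the unit circle and independent of R, and A a fixed
    2x2 matrix with A A^T = [[1, rho], [rho, 1]] (Cholesky factor)."""
    r = (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-cdf Pareto(alpha)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    u = np.vstack([np.cos(phi), np.sin(phi)])     # uniform on the unit circle
    a = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    xy = r * (a @ u)
    return xy[0], xy[1]

rng = np.random.default_rng(1)
x, y = sample_elliptical(1000, 0.5, 5.0, rng)
```

Since E[UUᵀ] = I/2, the covariance of (X, Y) is proportional to AAᵀ, so the correlation of the sampled pair equals ρ whenever α > 2.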

Coverage probabilities for the bootstrap method (I^B_γ) and the empirical likelihood method (I^{EL}_γ) with level γ = 0.9 and 0.95 are reported in Table 1, which shows that (i) the proposed empirical likelihood method performs better than the normal approximation method in most cases; (ii) both methods are not very sensitive to the choice of bandwidth; and (iii) the proposed empirical likelihood method is more robust with respect to the choice of k for the elliptical distributions.

Fig. 1. Danish fire losses.

Next, we consider the non-zero losses to building and content in the Danish fire insurance claims; see Fig. 1. This data set is available at www.ma.hw.ac.uk/~mcneil/ and comprises 2167 fire losses over the period 1980–1990. Fig. 1 shows that there are some huge losses to both content and building, but few simultaneous large losses to both variables, which may suggest a weak tail dependence.


Table 2
Confidence intervals with level 0.9 and 0.95 for the bootstrap method based on the smoothing estimator θ̃ (i.e., solving T̂_n(θ) = 0) and the empirical likelihood method, for k = 60, 70, 80, 90, 100 and h = δ{Σ_{i=1}^n I(Û_i ≤ k/n, V̂_i ≤ k/n)}^{−1/3} with δ = 0.5, 1, 1.5.

(k, δ)       θ̃        I^B_0.90          I^B_0.95          I^EL_0.90       I^EL_0.95
(60, 0.5)    0.005     (−0.332, 0.263)   (−0.419, 0.303)   (−0.35, 0.37)   (−0.45, 0.48)
(60, 1.0)    0.013     (−0.313, 0.245)   (−0.379, 0.290)   (−0.37, 0.30)   (−0.32, 0.37)
(60, 1.5)    0.036     (−0.275, 0.241)   (−0.319, 0.304)   (−0.23, 0.30)   (−0.29, 0.37)
(70, 0.5)    −0.012    (−0.335, 0.179)   (−0.383, 0.236)   (−0.26, 0.21)   (−0.31, 0.26)
(70, 1.0)    0.014     (−0.276, 0.199)   (−0.327, 0.237)   (−0.23, 0.24)   (−0.28, 0.30)
(70, 1.5)    0.040     (−0.232, 0.253)   (−0.261, 0.297)   (−0.20, 0.27)   (−0.25, 0.33)
(80, 0.5)    0.057     (−0.218, 0.265)   (−0.254, 0.315)   (−0.24, 0.31)   (−0.30, 0.37)
(80, 1.0)    0.070     (−0.169, 0.254)   (−0.212, 0.310)   (−0.16, 0.29)   (−0.21, 0.34)
(80, 1.5)    0.079     (−0.147, 0.262)   (−0.179, 0.298)   (−0.12, 0.29)   (−0.16, 0.34)
(90, 0.5)    0.148     (−0.044, 0.372)   (−0.089, 0.404)   (0.00, 0.40)    (−0.05, 0.44)
(90, 1.0)    0.120     (−0.101, 0.323)   (−0.131, 0.362)   (−0.05, 0.33)   (−0.09, 0.37)
(90, 1.5)    0.104     (−0.097, 0.288)   (−0.124, 0.316)   (−0.07, 0.31)   (−0.10, 0.35)
(100, 0.5)   0.138     (−0.046, 0.320)   (−0.089, 0.363)   (−0.03, 0.31)   (−0.07, 0.34)
(100, 1.0)   0.127     (−0.058, 0.312)   (−0.081, 0.354)   (−0.02, 0.32)   (−0.06, 0.36)
(100, 1.5)   0.106     (−0.064, 0.266)   (−0.092, 0.306)   (−0.05, 0.30)   (−0.08, 0.33)

In Table 2 we report the confidence intervals I^B_γ and I^{EL}_γ with level γ = 0.9 and 0.95. As above, the bootstrap confidence interval is based on the smooth estimator θ̃ and 1000 bootstrap resamples, and the same kernel function G and choice of h are employed. For computing the empirical likelihood based confidence interval, we calculate the empirical likelihood ratio function for θ from −0.6 to 0.6 with step 0.01. Table 2 shows that the empirical likelihood confidence intervals are slightly more skewed to the right than the normal approximation confidence intervals. The results for k = 90 and 100 favor a positive θ over a nonpositive one, which may indicate a weak tail dependence, as suggested by Fig. 1.

The above computation was carried out on supercomputers with easy access to more than 1000 computer nodes. To give an idea of the computational time of the proposed jackknife empirical likelihood method, we reran the last case in Table 1 with one sample on a Mac with a 1.4 GHz Intel Core i5 processor. We used the R command proc.time to record both user time and system time, which were 437.541 s and 17.809 s, respectively, in computing the jackknife empirical likelihood ratio l(θ₀).

4. Proofs

Throughout we define

B_{lm} = sgn((Û_l − Û_m)(V̂_l − V̂_m)) − θ₀ = sgn((U_l − U_m)(V_l − V_m)) − θ₀,
g_h(x) = g((1 − (n/k)x)/h),   G_h(x) = G((1 − (n/k)x)/h),
A_{lm} = B_{lm} G_h(Û_l)G_h(Û_m)G_h(V̂_l)G_h(V̂_m),
Ā_{lm} = B_{lm} G_h(U_l)G_h(U_m)G_h(V_l)G_h(V_m),
A_{lm}^{(i)} = B_{lm} G_h(Û_l^{(i)})G_h(Û_m^{(i)})G_h(V̂_l^{(i)})G_h(V̂_m^{(i)}),

and

h₀(u, v) = {4C(u, v) − 2C(u, k/n) − 2C(k/n, v) + (1 − θ₀)C(k/n, k/n)} I(max(u, v) ≤ k/n).

Hence T̂_n(θ₀) = (2/(n(n − 1))) Σ_{1≤l<m≤n} A_{lm} and T̂_n^{(i)}(θ₀) = (2/((n − 1)(n − 2))) Σ_{1≤l<m≤n, l≠i, m≠i} A_{lm}^{(i)}.

The proof of Theorem 2.1 follows the standard procedure for proving the Wilks theorem for empirical likelihood methods. That is, we shall (i) prove the asymptotic normality of (1/√{nC³(k/n, k/n)}) Σ_{i=1}^n Ẑ_i(θ₀); (ii) prove the convergence in probability of (1/(nC³(k/n, k/n))) Σ_{i=1}^n Ẑ_i²(θ₀); and (iii) derive a bound for max_{1≤i≤n} |Ẑ_i(θ₀)|.

Before showing (i), we derive the asymptotic normality of √{n/C³(k/n, k/n)} T̂_n(θ₀), which is a smoothed version of the estimator in Asimit et al. (to appear).

Lemma 4.1. Under conditions (A1)–(A3), we have

√{n/C³(k/n, k/n)} T̂_n(θ₀)
  = (2/√{nC³(k/n, k/n)}) Σ_{i=1}^n h₀(U_i, V_i)
  + √c₀ (1/√k) Σ_{i=1}^n {I(U_i ≤ k/n) − k/n} {4∫₀¹ H₁₂(1, v)H(1, v) dv − 2(1 + θ₀)H₁(1, 1)}
  + √c₀ (1/√k) Σ_{i=1}^n {I(V_i ≤ k/n) − k/n} {4∫₀¹ H₁₂(u, 1)H(u, 1) du − 2(1 + θ₀)H₂(1, 1)} + o_p(1)
  := W_{n1} + W_{n2} + W_{n3} + o_p(1)   (4.1)

and

√{n/C³(k/n, k/n)} T̂_n(θ₀) →_d N(0, σ²),   (4.2)

where σ² = 4σ₁² + σ₂² + σ₃² + 2c₀σ₂σ₃ with

σ₁² = ∫₀¹∫₀¹ {4H(x, y) − 2H(x, 1) − 2H(1, y) + (1 − θ₀)}² dH(x, y),
σ₂ = √c₀ {4∫₀¹ H₁₂(1, v)H(1, v) dv − 2(1 + θ₀)H₁(1, 1)},
σ₃ = √c₀ {4∫₀¹ H₁₂(u, 1)H(u, 1) du − 2(1 + θ₀)H₂(1, 1)}.


Proof. Put

θ_n = E{(sgn((U₁ − U₂)(V₁ − V₂)) − θ₀) G_h(U₁)G_h(V₁)G_h(U₂)G_h(V₂)},
h̃(u₁, v₁, u₂, v₂) = {sgn((u₁ − u₂)(v₁ − v₂)) − θ₀} G_h(u₁)G_h(v₁)G_h(u₂)G_h(v₂) − θ_n,
h̃₁(u₁, v₁) = E{(sgn((u₁ − U₂)(v₁ − V₂)) − θ₀) G_h(u₁)G_h(v₁)G_h(U₂)G_h(V₂)} − θ_n,
S_{1n} = Σ_{i=1}^n h̃₁(U_i, V_i),
S_{2n} = Σ_{1≤l<m≤n} {h̃(U_l, V_l, U_m, V_m) − h̃₁(U_l, V_l) − h̃₁(U_m, V_m)},
T̃_n(θ₀) = (2/(n(n − 1))) Σ_{1≤l<m≤n} Ā_{lm}.

Since T̃_n(θ₀) is a U-statistic, the Hoeffding decomposition (see Hoeffding, 1948, or Lemma A on p. 178 of Serfling, 1980) gives

T̃_n(θ₀) = θ_n + (2/n)S_{1n} + (2/(n(n − 1)))S_{2n}.   (4.3)

Since G_h(u) = ∫_{−1}^1 g(t) I(u ≤ (1 − th)k/n) dt and the pairs (U₁, V₁) and (U₂, V₂) are independent, it is straightforward to check that

E{G_h(U₁)G_h(V₁)G_h(U₂)G_h(V₂)} = { ∫_{[−1,1]²} g(t₁)g(t₂) C((1 − t₁h)k/n, (1 − t₂h)k/n) dt₁ dt₂ }²,   (4.4)

and, writing sgn((u₁ − u₂)(v₁ − v₂)) − θ₀ = 4I(u₂ < u₁, v₂ < v₁) − 2I(u₂ < u₁) − 2I(v₂ < v₁) + (1 − θ₀), direct calculations with the change of variables u = (1 − th)k/n give

h̃₁(u₁, v₁) = {4E[I(U₂ < u₁, V₂ < v₁)G_h(U₂)G_h(V₂)] − 2E[I(U₂ < u₁)G_h(U₂)G_h(V₂)] − 2E[I(V₂ < v₁)G_h(U₂)G_h(V₂)] + (1 − θ₀)E[G_h(U₂)G_h(V₂)]} G_h(u₁)G_h(v₁) − θ_n
 = (1 + O(h)) {4C(u₁, v₁) − 2C(u₁, k/n) − 2C(k/n, v₁) + (1 − θ₀)C(k/n, k/n)} I(max(u₁, v₁) ≤ k/n) − θ_n.   (4.5)

It follows that

E h̃₁²(U₁, V₁) / C³(k/n, k/n) → ∫₀¹∫₀¹ {4H(x, y) − 2H(x, 1) − 2H(1, y) + (1 − θ₀)}² dH(x, y)   (4.6)

and, similarly,

E h̃²(U₁, V₁, U₂, V₂) / C²(k/n, k/n) → 1 − θ₀².   (4.7)

It is derived on pp. 178 and 184 of Serfling (1980) that

E S_{2n}² = (n(n − 1)/2) E g₂²,   (4.8)

where g₂ = h̃(U₁, V₁, U₂, V₂) − h̃₁(U₁, V₁) − h̃₁(U₂, V₂). From (4.6)–(4.8) and n C(k/n, k/n) → ∞, we have

√{n/C³(k/n, k/n)} (2/(n(n − 1))) S_{2n} = O_p(√{E g₂² / ((n − 1)C³(k/n, k/n))}) = o_p(1).   (4.9)

Since H(1, 1) = 1, Assumption (A1) shows that

(1/C(k/n, k/n)) ∫_{[−1,1]²} g(t₁)g(t₂) C((1 − t₁h)k/n, (1 − t₂h)k/n) dt₁ dt₂ − ∫_{[−1,1]²} g(t₁)g(t₂) H(1 − t₁h, 1 − t₂h) dt₁ dt₂ = O(A(k/n)),

and by the Taylor expansion and the symmetry of g, we have

∫_{[−1,1]²} g(t₁)g(t₂) H(1 − t₁h, 1 − t₂h) dt₁ dt₂ = H(1, 1) + O(h²) = 1 + O(h²).

Thus

(1/C(k/n, k/n)) ∫_{[−1,1]²} g(t₁)g(t₂) C((1 − t₁h)k/n, (1 − t₂h)k/n) dt₁ dt₂ = 1 + O(h²) + O(A(k/n)),

which implies that

(2θ₀/(n(n − 1)C²(k/n, k/n))) Σ_{1≤l<m≤n} E{G_h(U_l)G_h(V_l)G_h(U_m)G_h(V_m)} = θ₀ {1 + O(h²) + O(A(k/n))}.   (4.10)

Similar to the proof of (4.10), we can show that

θ_n / C²(k/n, k/n) = O(h²) + O(A(k/n)).   (4.11)

Therefore it follows from (4.3), (4.5), (4.9) and (4.11) that

√{n/C³(k/n, k/n)} T̃_n(θ₀) = (2/√{nC³(k/n, k/n)}) Σ_{i=1}^n h̃₁(U_i, V_i) + o_p(1) = (2/√{nC³(k/n, k/n)}) Σ_{i=1}^n h₀(U_i, V_i) + o_p(1).   (4.12)

Denote G_{n1}(x) = (1/n) Σ_{i=1}^n I(U_i ≤ x) and G_{n2}(y) = (1/n) Σ_{i=1}^n I(V_i ≤ y). Then Û_i = G_{n1}(U_i) and V̂_i = G_{n2}(V_i) for i = 1, ..., n. By the Taylor expansion, we have

A_{lm} − Ā_{lm} = B_{lm} g_h(U_l)G_h(U_m)G_h(V_l)G_h(V_m) (n/(kh))(G_{n1}(U_l) − U_l)
 + B_{lm} G_h(U_l)g_h(U_m)G_h(V_l)G_h(V_m) (n/(kh))(G_{n1}(U_m) − U_m)
 + B_{lm} G_h(U_l)G_h(U_m)g_h(V_l)G_h(V_m) (n/(kh))(G_{n2}(V_l) − V_l)
 + B_{lm} G_h(U_l)G_h(U_m)G_h(V_l)g_h(V_m) (n/(kh))(G_{n2}(V_m) − V_m)
 + (1/2) B_{lm} g_h′(ξ₁(l))G_h(U_m)G_h(V_l)G_h(V_m) (n/(kh))² (G_{n1}(U_l) − U_l)²
 + (1/2) B_{lm} G_h(U_l)g_h′(ξ₁(m))G_h(V_l)G_h(V_m) (n/(kh))² (G_{n1}(U_m) − U_m)²
 + (1/2) B_{lm} G_h(U_l)G_h(U_m)g_h′(ξ₂(l))G_h(V_m) (n/(kh))² (G_{n2}(V_l) − V_l)²
 + (1/2) B_{lm} G_h(U_l)G_h(U_m)G_h(V_l)g_h′(ξ₂(m)) (n/(kh))² (G_{n2}(V_m) − V_m)²
 + B_{lm} g_h(ξ₁(l))g_h(ξ₁(m))G_h(V_l)G_h(V_m) (n/(kh))² (G_{n1}(U_l) − U_l)(G_{n1}(U_m) − U_m)
 + B_{lm} g_h(ξ₁(l))g_h(ξ₂(l))G_h(U_m)G_h(V_m) (n/(kh))² (G_{n1}(U_l) − U_l)(G_{n2}(V_l) − V_l)
 + B_{lm} g_h(ξ₁(l))g_h(ξ₂(m))G_h(U_m)G_h(V_l) (n/(kh))² (G_{n1}(U_l) − U_l)(G_{n2}(V_m) − V_m)
 + B_{lm} g_h(ξ₁(m))g_h(ξ₂(l))G_h(U_l)G_h(V_m) (n/(kh))² (G_{n1}(U_m) − U_m)(G_{n2}(V_l) − V_l)
 + B_{lm} g_h(ξ₁(m))g_h(ξ₂(m))G_h(U_l)G_h(V_l) (n/(kh))² (G_{n1}(U_m) − U_m)(G_{n2}(V_m) − V_m)
 + B_{lm} g_h(ξ₂(l))g_h(ξ₂(m))G_h(U_l)G_h(U_m) (n/(kh))² (G_{n2}(V_l) − V_l)(G_{n2}(V_m) − V_m)
 := DA1_{lm} + DA2_{lm} + DA3_{lm} + DA4_{lm} + Σ_{j=1}^{10} DBj_{lm},   (4.13)

where ξ₁(l) lies between G_{n1}(U_l) and U_l, and ξ₂(l) lies between G_{n2}(V_l) and V_l. Direct but tedious calculations, together with the weak convergence of the tail empirical process, show that

√{n/C³(k/n, k/n)} (2/(n(n − 1))) Σ_{1≤l<m≤n} DA1_{lm}
 = √c₀ (1/√k) Σ_{i=1}^n {I(U_i ≤ k/n) − k/n} {2∫₀¹ H₁₂(1, v)H(1, v) dv − (1 + θ₀)H₁(1, 1)} + o_p(1).   (4.14)

Similarly,

√{n/C³(k/n, k/n)} (2/(n(n − 1))) Σ_{1≤l<m≤n} DA2_{lm} = √c₀ (1/√k) Σ_{i=1}^n {I(U_i ≤ k/n) − k/n} {2∫₀¹ H₁₂(1, v)H(1, v) dv − (1 + θ₀)H₁(1, 1)} + o_p(1),   (4.15)

√{n/C³(k/n, k/n)} (2/(n(n − 1))) Σ_{1≤l<m≤n} DA3_{lm} = √c₀ (1/√k) Σ_{i=1}^n {I(V_i ≤ k/n) − k/n} {2∫₀¹ H₁₂(u, 1)H(u, 1) du − (1 + θ₀)H₂(1, 1)} + o_p(1)   (4.16)

and

√{n/C³(k/n, k/n)} (2/(n(n − 1))) Σ_{1≤l<m≤n} DA4_{lm} = √c₀ (1/√k) Σ_{i=1}^n {I(V_i ≤ k/n) − k/n} {2∫₀¹ H₁₂(u, 1)H(u, 1) du − (1 + θ₀)H₂(1, 1)} + o_p(1).   (4.17)

Set C₀ = max(1, max_{x∈[−1,1]} g(x), max_{x∈[−1,1]} |g′(x)|, 1 − θ₀, 1 + θ₀, |θ₀|). Since g is a smooth density function with support [−1, 1] and Û_l takes values in {0, 1/n, ..., (n − 1)/n}, we have

Σ_{l=1}^n |g_h′(ξ₁(l))| ≤ C₀ Σ_{l=1}^n I(−1 ≤ (1 − (n/k)Û_l)/h ≤ 1) = C₀ Σ_{l=1}^n I((1 − h)k/n ≤ Û_l ≤ (1 + h)k/n) ≤ C₀ {n[(1 + h)k/n − (1 − h)k/n] + 2} ≤ 2C₀(1 + kh).   (4.18)

By (4.18) and condition (A3), we can show that

√{n/C³(k/n, k/n)} (2/(n(n − 1))) Σ_{1≤l<m≤n} DBj_{lm} = O_p(1/(h√{nC(k/n, k/n)})) = o_p(1)   (4.19)

for j = 1, 2, 3, 4, and

√{n/C³(k/n, k/n)} (2/(n(n − 1))) Σ_{1≤l<m≤n} |DBj_{lm}| = o_p(1)   (4.20)

for j = 5, 6, 7, 8, 9, 10. Hence (4.1) follows from (4.12)–(4.17), (4.19) and (4.20).

Further we have

E{ (h₀(U_i, V_i)/√{E h₀²(U₁, V₁)}) (I(U_i ≤ k/n) − k/n)/√{k/n} } → 0,
E{ (h₀(U_i, V_i)/√{E h₀²(U₁, V₁)}) (I(V_i ≤ k/n) − k/n)/√{k/n} } → 0,
E{ (I(U_i ≤ k/n) − k/n)(I(V_i ≤ k/n) − k/n) } / (k/n) = {C(k/n, k/n) − (k/n)²}/(k/n) → c₀.

Consequently, it follows from the Cramér–Wold device that

(W_{n1}, W_{n2}, W_{n3})ᵀ →_d N(0, Σ)

as n → ∞, where

Σ = ( 4σ₁²   0        0
      0      σ₂²      c₀σ₂σ₃
      0      c₀σ₂σ₃   σ₃² ),

which implies (4.2).

Lemma 4.2. Under conditions (A1)–(A3), we have

(1/√{nC³(k/n, k/n)}) Σ_{i=1}^n Ẑ_i(θ₀) →_d N(0, σ²)   (4.21)

as n → ∞, where σ² = lim_{n→∞} E(W_{n1} + W_{n2} + W_{n3})² is given in (4.2).

Proof. According to the definition, we have

(1/√{nC³(k/n, k/n)}) Σ_{i=1}^n Ẑ_i(θ₀) = √{n/C³(k/n, k/n)} T̂_n(θ₀) + ((n − 1)/√{nC³(k/n, k/n)}) Σ_{i=1}^n {T̂_n(θ₀) − T̂_n^{(i)}(θ₀)}.

A. Liu et al. / Insurance: Mathematics and Economics 64 (2015) 294–305

By Lemma 4.1, to obtain (4.21), it is sufficient to prove
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}\big[\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)\big]=o_p(1).
\tag{4.22}
\]
Note that $A_{lm}=A_{ml}$, which allows us to write
\[
\hat T_n^{(i)}(\theta_0)
=\frac{2}{(n-1)(n-2)}\Big\{\sum_{1\le l<m\le n}A_{lm}^{(i)}-\sum_{m>i}A_{im}^{(i)}-\sum_{l<i}A_{li}^{(i)}\Big\}
=\frac{2}{(n-1)(n-2)}\Big\{\sum_{1\le l<m\le n}A_{lm}^{(i)}-\sum_{l=1}^{n}A_{li}^{(i)}+A_{ii}^{(i)}\Big\}.
\]
Thus,
\[
\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)
=\frac{2}{(n-1)(n-2)}\sum_{1\le l<m\le n}\big[A_{lm}-A_{lm}^{(i)}\big]
-\frac{4}{n(n-1)(n-2)}\sum_{1\le l<m\le n}A_{lm}
+\frac{2}{(n-1)(n-2)}\Big\{\sum_{l=1}^{n}A_{li}^{(i)}-A_{ii}^{(i)}\Big\}
:=D_{1i}-D_{2i}+D_{3i},
\tag{4.23}
\]
and hence
\[
\sum_{i=1}^{n}\big[\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)\big]
=\frac{2}{(n-1)(n-2)}\Big\{\sum_{i=1}^{n}\sum_{1\le l<m\le n}\big[A_{lm}-A_{lm}^{(i)}\big]
-\sum_{i=1}^{n}\sum_{l=1}^{n}\big[A_{li}-A_{li}^{(i)}\big]
+\sum_{i=1}^{n}\big[A_{ii}-A_{ii}^{(i)}\big]\Big\}
:=\sum_{i=1}^{n}\tilde D_{1i}-\sum_{i=1}^{n}\tilde D_{2i}+\sum_{i=1}^{n}\tilde D_{3i}.
\]
By the Taylor expansion, we have
\[
A_{lm}-A_{lm}^{(i)}
=B_{lm}\big\{G_h(\hat U_l)G_h(\hat U_m)G_h(\hat V_l)G_h(\hat V_m)
-G_h(\hat U_l^{(i)})G_h(\hat U_m^{(i)})G_h(\hat V_l^{(i)})G_h(\hat V_m^{(i)})\big\}
\tag{4.24}
\]
\begin{align*}
&=B_{lm}\,g_h(\hat U_l)G_h(\hat U_m)G_h(\hat V_l)G_h(\hat V_m)\,\frac{n}{kh}(\hat U_l-\hat U_l^{(i)})
+B_{lm}\,G_h(\hat U_l)g_h(\hat U_m)G_h(\hat V_l)G_h(\hat V_m)\,\frac{n}{kh}(\hat U_m-\hat U_m^{(i)})\\
&\quad+B_{lm}\,G_h(\hat U_l)G_h(\hat U_m)g_h(\hat V_l)G_h(\hat V_m)\,\frac{n}{kh}(\hat V_l-\hat V_l^{(i)})
+B_{lm}\,G_h(\hat U_l)G_h(\hat U_m)G_h(\hat V_l)g_h(\hat V_m)\,\frac{n}{kh}(\hat V_m-\hat V_m^{(i)})\\
&\quad+\frac12 B_{lm}\,g_h'(\xi_1(i,l))G_h(\hat U_m)G_h(\hat V_l)G_h(\hat V_m)\Big(\frac{n}{kh}\Big)^2(\hat U_l-\hat U_l^{(i)})^2
+\frac12 B_{lm}\,G_h(\hat U_l)g_h'(\xi_1(i,m))G_h(\hat V_l)G_h(\hat V_m)\Big(\frac{n}{kh}\Big)^2(\hat U_m-\hat U_m^{(i)})^2\\
&\quad+\frac12 B_{lm}\,G_h(\hat U_l)G_h(\hat U_m)g_h'(\xi_2(i,l))G_h(\hat V_m)\Big(\frac{n}{kh}\Big)^2(\hat V_l-\hat V_l^{(i)})^2
+\frac12 B_{lm}\,G_h(\hat U_l)G_h(\hat U_m)G_h(\hat V_l)g_h'(\xi_2(i,m))\Big(\frac{n}{kh}\Big)^2(\hat V_m-\hat V_m^{(i)})^2\\
&\quad+B_{lm}\,g_h(\xi_1(i,l))g_h(\xi_1(i,m))G_h(\hat V_l)G_h(\hat V_m)\Big(\frac{n}{kh}\Big)^2(\hat U_l-\hat U_l^{(i)})(\hat U_m-\hat U_m^{(i)})
+B_{lm}\,g_h(\xi_1(i,l))g_h(\xi_2(i,l))G_h(\hat U_m)G_h(\hat V_m)\Big(\frac{n}{kh}\Big)^2(\hat U_l-\hat U_l^{(i)})(\hat V_l-\hat V_l^{(i)})\\
&\quad+B_{lm}\,g_h(\xi_1(i,l))g_h(\xi_2(i,m))G_h(\hat U_m)G_h(\hat V_l)\Big(\frac{n}{kh}\Big)^2(\hat U_l-\hat U_l^{(i)})(\hat V_m-\hat V_m^{(i)})
+B_{lm}\,g_h(\xi_1(i,m))g_h(\xi_2(i,l))G_h(\hat U_l)G_h(\hat V_m)\Big(\frac{n}{kh}\Big)^2(\hat U_m-\hat U_m^{(i)})(\hat V_l-\hat V_l^{(i)})\\
&\quad+B_{lm}\,g_h(\xi_1(i,m))g_h(\xi_2(i,m))G_h(\hat U_l)G_h(\hat V_l)\Big(\frac{n}{kh}\Big)^2(\hat U_m-\hat U_m^{(i)})(\hat V_m-\hat V_m^{(i)})
+B_{lm}\,g_h(\xi_2(i,l))g_h(\xi_2(i,m))G_h(\hat U_l)G_h(\hat U_m)\Big(\frac{n}{kh}\Big)^2(\hat V_l-\hat V_l^{(i)})(\hat V_m-\hat V_m^{(i)})\\
&:=DA_{1lm,i}+DA_{2lm,i}+DA_{3lm,i}+DA_{4lm,i}+\sum_{k=1}^{10}DB_{klm,i},
\tag{4.25}
\end{align*}
where $\xi_1(i,k)$, $k=l,m$, is between $\hat U_k$ and $\hat U_k^{(i)}$, and $\xi_2(i,k)$ is between $\hat V_k$ and $\hat V_k^{(i)}$. Since
\[
\hat U_l-\hat U_l^{(i)}=\frac{1}{n-1}I(\hat U_i<\hat U_l)-\frac{1}{n-1}\hat U_l,
\]
we have $\sum_{i=1}^{n}(\hat U_l-\hat U_l^{(i)})=0$, and similarly $\sum_{i=1}^{n}(\hat V_l-\hat V_l^{(i)})=0$.
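The leave-one-out difference formula and its zero-sum property can be checked directly. A sketch with a simple rank-based definition of $\hat U_l$ (an illustrative assumption; the paper's $\hat U_l$ is defined through the transform $1-\hat F_1$, but the combinatorics are identical):

```python
# With U(l) = (1/n) * #{j : X_j < X_l} and the leave-one-out version
# U_loo(l, i) = (1/(n-1)) * #{j != i : X_j < X_l}, we have
#   U(l) - U_loo(l, i) = (I(X_i < X_l) - U(l)) / (n - 1),
# and therefore sum_i (U(l) - U_loo(l, i)) = 0, as used above.
x = [2.3, 0.7, 1.9, 3.1, 0.2, 1.1, 2.8]
n = len(x)

def U(l):
    return sum(xj < x[l] for xj in x) / n

def U_loo(l, i):
    return sum(x[j] < x[l] for j in range(n) if j != i) / (n - 1)

for l in range(n):
    for i in range(n):
        direct = U(l) - U_loo(l, i)
        formula = ((x[i] < x[l]) - U(l)) / (n - 1)
        assert abs(direct - formula) < 1e-12
    # first-order (DA) terms cancel after summing over the deleted index
    assert abs(sum(U(l) - U_loo(l, i) for i in range(n))) < 1e-12
```

This cancellation is exactly why only the second-order $DB$ terms survive in (4.26) below.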

Therefore,
\[
\sum_{i=1}^{n}\tilde D_{1i}
=\frac{2}{(n-1)(n-2)}\sum_{i=1}^{n}\sum_{1\le l<m\le n}\sum_{k=1}^{10}DB_{klm,i}.
\tag{4.26}
\]
Note that
\[
\hat U_l-\hat U_l^{(i)}
=\begin{cases}\dfrac{1-\hat U_l}{n-1}, & \text{if }\hat U_i<\hat U_l,\\[1ex]
-\dfrac{\hat U_l}{n-1}, & \text{if }\hat U_i\ge\hat U_l,\end{cases}
\tag{4.27}
\]
so that $|\hat U_l-\hat U_l^{(i)}|\le\frac{1}{n-1}I(\hat U_i<\hat U_l)+\frac{1}{n-1}\,\frac{(1+h)k}{n}$ on the relevant event $\{\hat U_l<(1+h)k/n\}$, and
\[
\sum_{i=1}^{n}I\Big(\hat U_i<(1+h)\frac{k}{n}\Big)\le k(1+h)=O(k).
\tag{4.28}
\]
It follows from (4.27) and (4.28) that
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}\sum_{1\le l<m\le n}\frac{2}{(n-1)(n-2)}|DB_{jlm,i}|=o_p(1)
\]
for $j=1,2,3,4$ and for $j=5,6,7,8,9,10$, which imply that
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}|\tilde D_{1i}|\stackrel{p}{\longrightarrow}0
\tag{4.29}
\]
by using (4.26). Note that $\hat U_i-\hat U_i^{(i)}=-\frac{1}{n-1}\hat U_i$ and $\sum_{i=1}^{n}(\hat U_i-\hat U_i^{(i)})=-\frac{1}{n-1}\sum_{i=1}^{n}\hat U_i$. Similar to the above derivations, we have
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}\sum_{l=1}^{n}\frac{2}{(n-1)(n-2)}|DA_{jli,i}|=o_p(1)\quad(j=1,2,3,4)
\]
and
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}\sum_{l=1}^{n}\frac{2}{(n-1)(n-2)}|DB_{jli,i}|=o_p(1)\quad(j=1,\dots,10),
\]
which imply that
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}|\tilde D_{2i}|\stackrel{p}{\longrightarrow}0.
\tag{4.30}
\]
Further we can show that
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}\frac{2}{(n-1)(n-2)}|DA_{jii,i}|=o_p(1)\quad(j=1,2,3,4)
\]
and
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}\frac{2}{(n-1)(n-2)}|DB_{jii,i}|=o_p(1)\quad(j=1,\dots,10),
\]
which imply that
\[
\frac{n-1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}|\tilde D_{3i}|\stackrel{p}{\longrightarrow}0.
\tag{4.31}
\]
Hence, (4.22) follows from (4.29)–(4.31), i.e., the lemma holds. □
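The rewrite of $\hat T_n^{(i)}(\theta_0)$ in the proof of Lemma 4.2 uses only the symmetry $A_{lm}=A_{ml}$; the underlying index identity can be checked directly on any symmetric array (a hypothetical random example, unrelated to the paper's kernel):

```python
# Deletion identity for a symmetric array A (0-based indices):
#   sum_{l<m, l!=i, m!=i} A[l][m]
#     = sum_{l<m} A[l][m] - sum_{l} A[l][i] + A[i][i].
import random

random.seed(7)
n = 8
A = [[0.0] * n for _ in range(n)]
for l in range(n):
    for m in range(l, n):
        A[l][m] = A[m][l] = random.random()

def pair_sum(exclude=None):
    return sum(A[l][m] for l in range(n) for m in range(l + 1, n)
               if l != exclude and m != exclude)

for i in range(n):
    left = pair_sum(exclude=i)
    right = pair_sum() - sum(A[l][i] for l in range(n)) + A[i][i]
    assert abs(left - right) < 1e-9
```

Deleting index $i$ removes the row and column of $A$ through $i$; by symmetry these collapse into the single column sum $\sum_l A_{li}$, with the diagonal entry $A_{ii}$ added back once.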



Lemma 4.3. Under conditions (A1)–(A3), we have
\[
\frac{1}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\hat Z_i^2(\theta_0)\stackrel{P}{\longrightarrow}\sigma^2
\tag{4.32}
\]
as $n\to\infty$, and
\[
\max_{1\le i\le n}|\hat Z_i(\theta_0)|=o_p\Big(\sqrt{nC^3(k/n,k/n)}\Big).
\tag{4.33}
\]

Proof. According to the definition, we have
\[
\frac{1}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\hat Z_i^2(\theta_0)
=\frac{n}{nC^3(k/n,k/n)}\hat T_n^2(\theta_0)
+\frac{2(n-1)}{nC^3(k/n,k/n)}\hat T_n(\theta_0)\sum_{i=1}^{n}\big[\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)\big]
+\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\big[\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)\big]^2.
\]
From Lemma 4.1 and (4.22), we conclude that
\[
\frac{1}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\hat Z_i^2(\theta_0)
=\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\big[\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)\big]^2+o_p(1).
\]
Therefore, to prove (4.32), we only need to show that, as $n\to\infty$,
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\big[\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)\big]^2\stackrel{p}{\longrightarrow}\sigma^2.
\tag{4.34}
\]
By (4.23), we have
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\big[\hat T_n(\theta_0)-\hat T_n^{(i)}(\theta_0)\big]^2
=\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}D_{1i}^2
+\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\{D_{3i}-D_{2i}\}^2
+\frac{2(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}D_{1i}\{D_{3i}-D_{2i}\}.
\tag{4.35}
\]
By (4.25), we can further write
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}D_{1i}^2:=I_1+I_2+I_3,
\tag{4.36}
\]
where
\[
I_1=\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\Big\{\frac{2}{(n-1)(n-2)}\sum_{1\le l<m\le n}\sum_{j=1}^{4}DA_{jlm,i}\Big\}^2,\qquad
I_2=\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\Big\{\frac{2}{(n-1)(n-2)}\sum_{1\le l<m\le n}\sum_{j=1}^{10}DB_{jlm,i}\Big\}^2,
\]
and $I_3$ is twice the corresponding cross term. It is straightforward to verify that
\[
\frac1n\sum_{i=1}^{n}(\hat U_l-\hat U_l^{(i)})(\hat U_{l'}-\hat U_{l'}^{(i)})
=\frac{\hat U_l\wedge\hat U_{l'}}{(n-1)^2}-\frac{\hat U_l\hat U_{l'}}{(n-1)^2},
\qquad
\frac1n\sum_{i=1}^{n}(\hat U_l-\hat U_l^{(i)})(\hat V_{l'}-\hat V_{l'}^{(i)})
=\frac{1}{(n-1)^2}\big[\hat F(X_l,Y_{l'})-\hat F_1(X_l)\hat F_2(Y_{l'})\big],
\tag{4.37}
\]
where the $\epsilon_n$'s appearing in the resulting error factors $\{1+o(\epsilon_n)\}$ are constants with $\epsilon_n\to0$. First look at the first term in $I_1$. Using (4.37) and the fact that $|G_h|\le1$ and $|g_h|$ is bounded uniformly for all $l,m=1,\dots,n$, the term
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\Big\{\frac{2}{(n-1)(n-2)}\sum_{1\le l<m\le n}DA_{1lm,i}\Big\}^2
\]
reduces to a double sum over $l\ne m$ and $l'\ne m'$ of
$B_{lm}\,g_h(\hat U_l)G_h(\hat U_m)G_h(\hat V_l)G_h(\hat V_m)\,B_{l'm'}\,g_h(\hat U_{l'})G_h(\hat U_{m'})G_h(\hat V_{l'})G_h(\hat V_{m'})$
weighted by the covariances in (4.37), whose limit can be computed directly. Similarly we can show the convergence of the other terms in $I_1$, $I_2$ and $I_3$, which leads to
\[
I_1\stackrel{p}{\longrightarrow}\lim_{n\to\infty}E(W_{n2}+W_{n3})^2,\qquad
I_2\stackrel{p}{\longrightarrow}0,\qquad
I_3\stackrel{p}{\longrightarrow}0,
\]
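The variance limits here reduce to moments of the projected Kendall sign kernel: for $(U,V)\sim H$ and a continuity point $(u,v)$, $E[\operatorname{sgn}((U-u)(V-v))]=4H(u,v)-2H(u,1)-2H(1,v)+1$, the quantity that appears in $\lim EW_{n1}^2$. A quick exact check on a hypothetical discrete toy distribution (atoms chosen so that no tie occurs at the evaluation point):

```python
# Projection of the Kendall sign kernel against a discrete distribution H:
#   E[sgn((U - u)(V - v))] = 4H(u,v) - 2H(u,1) - 2H(1,v) + 1
# holds exactly whenever (u, v) shares no coordinate with an atom.
atoms = [(0.1, 0.2), (0.3, 0.8), (0.55, 0.4), (0.7, 0.9), (0.95, 0.15)]
p = 1.0 / len(atoms)

def H(u, v):
    return sum(p for (a, b) in atoms if a <= u and b <= v)

def sgn(t):
    return (t > 0) - (t < 0)

u, v = 0.5, 0.5  # no atom coordinate equals 0.5, so every sign is +1 or -1
lhs = sum(p * sgn((a - u) * (b - v)) for (a, b) in atoms)
rhs = 4 * H(u, v) - 2 * H(u, 1) - 2 * H(1, v) + 1
assert abs(lhs - rhs) < 1e-12
```

The identity follows by splitting the expectation into the four quadrants around $(u,v)$ and expressing each quadrant probability through $H$ and its margins.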


i.e.,
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}D_{1i}^2\stackrel{p}{\longrightarrow}\lim_{n\to\infty}E\{W_{n2}+W_{n3}\}^2
\tag{4.38}
\]
by using (4.36). It is straightforward to verify that
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\Big\{\frac{2}{n(n-1)}\sum_{l=1}^{n}A_{li}^{(i)}\Big\}^2
=\frac{4}{n^3C^3(k/n,k/n)}\sum_{i=1}^{n}\sum_{l=1}^{n}\sum_{m=1}^{n}B_{li}B_{mi}\,G_h(\hat U_l)G_h(\hat V_l)G_h(\hat U_i)G_h(\hat V_i)G_h(\hat U_m)G_h(\hat V_m)\{1+o_p(1)\}
\]
\[
\stackrel{p}{\longrightarrow}4\int\!\!\int\!\!\int\{\operatorname{sgn}((u_1-u_3)(v_1-v_3))-\theta_0\}\{\operatorname{sgn}((u_2-u_3)(v_2-v_3))-\theta_0\}\,dH(u_1,v_1)\,dH(u_2,v_2)\,dH(u_3,v_3)
\]
\[
=4\int_0^1\!\!\int_0^1\{4H(u_3,v_3)-2H(u_3,1)-2H(1,v_3)+1-\theta_0\}^2\,dH(u_3,v_3)
=\lim_{n\to\infty}EW_{n1}^2.
\]
Further we can show that
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\{D_{3i}-D_{2i}\}^2
=\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\Big\{\frac{2}{n(n-1)}\sum_{l=1}^{n}A_{li}^{(i)}\Big\}^2+o_p(1)
=\lim_{n\to\infty}EW_{n1}^2+o_p(1)
\tag{4.39}
\]
and
\[
\frac{(n-1)^2}{nC^3(k/n,k/n)}\sum_{i=1}^{n}D_{1i}\{D_{3i}-D_{2i}\}\stackrel{p}{\longrightarrow}\lim_{n\to\infty}E\{W_{n1}(W_{n2}+W_{n3})\}.
\tag{4.40}
\]
Hence, (4.34) follows from (4.35), (4.38)–(4.40), i.e., (4.32) follows. Eq. (4.33) can be derived similarly by examining all the terms as above, and we skip the details. □

Proof of Theorem 2.1. Denote
\[
\bar Z_n(\theta_0)=\frac1n\sum_{i=1}^{n}\hat Z_i(\theta_0),\qquad
S_n=\frac1n\sum_{i=1}^{n}\hat Z_i^2(\theta_0),\qquad
g(\lambda)=\frac1n\sum_{i=1}^{n}\frac{\hat Z_i(\theta_0)}{1+\lambda\hat Z_i(\theta_0)}.
\]
Then we have
\[
0=|g(\lambda)|=\Big|\frac1n\sum_{i=1}^{n}\hat Z_i(\theta_0)-\frac{\lambda}{n}\sum_{i=1}^{n}\frac{\hat Z_i^2(\theta_0)}{1+\lambda\hat Z_i(\theta_0)}\Big|
\ge\frac{|\lambda|\,\frac1n\sum_{i=1}^{n}\hat Z_i^2(\theta_0)}{1+|\lambda|\max_{1\le i\le n}|\hat Z_i(\theta_0)|}-\Big|\frac1n\sum_{i=1}^{n}\hat Z_i(\theta_0)\Big|.
\]
From Lemmas 4.2 and 4.3, this implies that
\[
|\lambda|=O_p\Big(\frac{1}{\sqrt{nC^3(k/n,k/n)}}\Big).
\tag{4.41}
\]
Let $\gamma_i=\lambda\hat Z_i(\theta_0)$; then by (4.33) and (4.41), we have
\[
\max_{1\le i\le n}|\gamma_i|=o_p(1).
\tag{4.42}
\]
Note that
\[
0=g(\lambda)=\frac1n\sum_{i=1}^{n}\frac{\hat Z_i(\theta_0)}{1+\gamma_i}
=\frac1n\sum_{i=1}^{n}\hat Z_i(\theta_0)-\frac{\lambda}{n}\sum_{i=1}^{n}\hat Z_i^2(\theta_0)+\frac1n\sum_{i=1}^{n}\frac{\hat Z_i(\theta_0)\gamma_i^2}{1+\gamma_i}
=\bar Z_n(\theta_0)-\lambda S_n+\frac1n\sum_{i=1}^{n}\frac{\hat Z_i(\theta_0)\gamma_i^2}{1+\gamma_i}.
\]
It follows from (4.32), (4.33) and (4.42) that
\[
\Big|\frac1n\sum_{i=1}^{n}\frac{\hat Z_i(\theta_0)\gamma_i^2}{1+\gamma_i}\Big|
\le\frac{\lambda^2\max_{1\le i\le n}|\hat Z_i(\theta_0)|}{1+o_p(1)}\cdot\frac1n\sum_{i=1}^{n}\hat Z_i^2(\theta_0)
=o_p\Big(\sqrt{\frac{C^3(k/n,k/n)}{n}}\Big).
\]
Therefore $\lambda S_n=\bar Z_n(\theta_0)+o_p\big(\sqrt{C^3(k/n,k/n)/n}\big)$. By Lemmas 4.2 and 4.3, we have
\[
l(\theta_0)=2\sum_{i=1}^{n}\log\big(1+\lambda\hat Z_i(\theta_0)\big)
=2\sum_{i=1}^{n}\lambda\hat Z_i(\theta_0)-\sum_{i=1}^{n}\lambda^2\hat Z_i^2(\theta_0)+o_p(1)
=\frac{\Big(\frac{1}{\sqrt{nC^3(k/n,k/n)}}\sum_{i=1}^{n}\hat Z_i(\theta_0)\Big)^2}{\frac{1}{nC^3(k/n,k/n)}\sum_{i=1}^{n}\hat Z_i^2(\theta_0)}+o_p(1)
\stackrel{d}{\longrightarrow}\chi_1^2
\]
as $n\to\infty$. □
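The empirical likelihood machinery in this proof is fully computable. A minimal sketch, assuming only the standard form of the estimating equation $g(\lambda)=\frac1n\sum_i \hat Z_i/(1+\lambda\hat Z_i)=0$ (the pseudo-values below are hypothetical, not from the paper's estimator):

```python
# Solve g(lambda) = (1/n) sum Z_i / (1 + lambda*Z_i) = 0 by bisection on the
# admissible interval (1 + lambda*Z_i > 0 for all i), then form the
# log-likelihood ratio l = 2 * sum log(1 + lambda*Z_i).
import math

def el_log_ratio(z, tol=1e-12):
    n = len(z)
    zmin, zmax = min(z), max(z)
    assert zmin < 0 < zmax, "0 must lie inside the convex hull of the Z_i"
    # admissible range: lambda in (-1/zmax, -1/zmin); g is decreasing on it
    lo, hi = -1.0 / zmax + 1e-10, -1.0 / zmin - 1e-10
    g = lambda lam: sum(zi / (1.0 + lam * zi) for zi in z) / n
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return lam, 2.0 * sum(math.log(1.0 + lam * zi) for zi in z)

z = [0.8, -0.5, 1.2, -1.1, 0.4, -0.2, 0.9, -0.7]
lam, l = el_log_ratio(z)
assert abs(sum(zi / (1 + lam * zi) for zi in z)) < 1e-6
assert l >= 0.0
```

Since $g$ is strictly decreasing in $\lambda$ with poles at the interval endpoints, the root is unique and bisection is safe; $l(\theta_0)\ge0$ always, because $\lambda=0$ gives $l=0$ and the root maximizes $2\sum\log(1+\lambda\hat Z_i)$. Under the theorem, $l(\theta_0)$ is compared with $\chi_1^2$ quantiles to form confidence intervals.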



Acknowledgments

The authors thank a reviewer for his/her helpful comments. Peng's research was partly supported by the Simons Foundation.

References

Allen, L., Bali, T.G., Tang, Y., 2012. Does systemic risk in the financial sector predict future economic downturns? Rev. Financ. Stud. 25, 3000–3036.
Asimit, A., Gerrard, R., Hou, Y., Peng, L., 2015. Tail dependence measure for modeling financial extreme co-movements. J. Econometrics, to appear.
Bisias, D., Flood, M., Lo, A.W., Valavanis, S., 2012. A survey of systemic risk analytics. Office of Financial Research, Working Paper #0001.
Chen, H., Cummins, J.D., Viswanathan, K.S., Weiss, M.A., 2013. Systemic risk and the interconnectedness between banks and insurers: an econometric analysis. J. Risk Insurance 81, 623–652.
Draisma, G., Drees, H., Ferreira, A., de Haan, L., 2004. Bivariate tail estimation: dependence in asymptotic independence. Bernoulli 10, 251–280.
Dutang, C., Goegebeur, Y., Guillou, A., 2014. Robust and bias-corrected estimation of the coefficient of tail dependence. Insurance Math. Econom. 57, 46–57.
Goegebeur, Y., Guillou, A., 2012. Asymptotically unbiased estimation of the coefficient of tail dependence. Scand. J. Stat. 40, 174–189.
Haug, S., Klüppelberg, C., Peng, L., 2011. Statistical models and methods for dependence in insurance data. J. Korean Statist. Soc. 40, 125–139.
Hoeffding, W., 1948. A class of statistics with asymptotically normal distribution. Ann. Math. Stat. 19, 293–325.
Jing, B.-Y., Yuan, J.Q., Zhou, W., 2009. Jackknife empirical likelihood. J. Amer. Statist. Assoc. 104, 1224–1232.
Ledford, A.W., Tawn, J., 1996. Statistics for near independence in multivariate extreme values. Biometrika 83, 169–187.
Ledford, A.W., Tawn, J., 1997. Modelling dependence within joint tail regions. J. R. Stat. Soc. Ser. B Stat. Methodol. 59, 475–499.
Li, L., Yuen, K.C., Yang, J., 2014. Distorted mix method for constructing copulas with tail dependence. Insurance Math. Econom. 57, 77–89.
Manner, H., Segers, J., 2011. Tails of correlation mixtures of elliptical copulas. Insurance Math. Econom. 48, 153–160.
McNeil, A.J., Frey, R., Embrechts, P., 2005. Quantitative Risk Management: Concepts, Techniques, Tools. Princeton University Press.
Owen, A.B., 2001. Empirical Likelihood. Chapman & Hall.
Peng, L., 1999. Estimation of the coefficient of tail dependence in bivariate extremes. Statist. Probab. Lett. 43, 399–409.
Peng, L., Qi, Y., 2010. Smoothed jackknife empirical likelihood method for tail copulas. TEST 19, 514–536.
Qin, J., Lawless, J., 1994. Empirical likelihood and general estimating equations. Ann. Statist. 22, 300–325.
Serfling, R.J., 1980. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, Inc.