Pitfalls in using Weibull tailed distributions

Journal of Statistical Planning and Inference 140 (2010) 2018–2024

Alexandru V. Asimit^a, Deyuan Li^b,*, Liang Peng^c

a School of Mathematics, The University of Manchester, UK
b School of Management, Fudan University, Room 736, Siyuan Building, 670 Guoshun Road, 200433 Shanghai, PR China
c School of Mathematics, Georgia Institute of Technology, USA
* Corresponding author.

Article history: Received 11 August 2009; received in revised form 23 January 2010; accepted 26 January 2010; available online 4 February 2010.

Abstract

By assuming that the underlying distribution belongs to the domain of attraction of an extreme value distribution, one can extrapolate the data to a far tail region so that a rare event can be predicted. However, when the distribution is in the domain of attraction of a Gumbel distribution, the extrapolation is generally quite limited in comparison with a heavy tailed distribution. In view of this drawback, a Weibull tailed distribution has been studied recently. Some methods for choosing the sample fraction in estimating the Weibull tail coefficient and some bias reduction estimators have been proposed in the literature. In this paper, we show that the theoretical optimal sample fraction does not exist and that a bias reduction estimator does not always produce a smaller mean squared error than a biased estimator. These findings differ from the case of a heavy tailed distribution. Further, we propose a refined class of Weibull tailed distributions which are more useful in estimating high quantiles and extreme tail probabilities.

Keywords: Asymptotic mean squared error; Extreme tail probability; High quantile; Regular variation; Weibull tail coefficient

1. Introduction

Suppose $X_1,\ldots,X_n$ are independent and identically distributed random variables with distribution function $F$, which has a Weibull tail coefficient $\theta$. That is,
$$1-F(x)=\exp\{-H(x)\}\quad\text{with}\quad H^{\leftarrow}(x)=\inf\{t: H(t)\ge x\}=x^{\theta}\ell(x), \qquad (1.1)$$
where $\ell(x)$ is a slowly varying function at infinity, i.e.,
$$\lim_{t\to\infty}\ell(tx)/\ell(t)=1\quad\text{for all } x>0.$$
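Condition (1.1) can be checked numerically for a concrete distribution. The sketch below is our illustration, not part of the paper: it uses the Gamma(2, 1) distribution, whose survival function has the closed form $1-F(x)=(1+x)e^{-x}$, so that $H(x)=x-\log(1+x)$, $\theta=1$, and $\ell(x)=H^{\leftarrow}(x)/x$; the slow-variation ratio $\ell(tx)/\ell(t)$ is seen to approach 1 as $t$ grows.

```python
import math

def H(x):
    # Cumulative hazard of Gamma(2,1): 1 - F(x) = (1 + x) * exp(-x)
    return x - math.log1p(x)

def H_inv(y, tol=1e-12):
    # Invert the increasing function H by bisection; H_inv(y) lies in [y, y + 50]
    lo, hi = y, y + 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if H(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ell(t):
    # Slowly varying part in (1.1): H_inv(x) = x**theta * ell(x) with theta = 1
    return H_inv(t) / t

# ell(t*x)/ell(t) should tend to 1 as t grows (here x = 2 is fixed)
ratios = [ell(2.0 * t) / ell(t) for t in (50.0, 500.0, 5000.0)]
print(ratios)
```

The deviation of each ratio from 1 shrinks as $t$ increases, which is exactly the slow-variation requirement in (1.1).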

This class of distributions includes some well-known light tailed distributions such as the Weibull, Gaussian, gamma and logistic distributions. Owing to the applications of these distributions in insurance, estimating $\theta$ has attracted much attention recently. Accurate estimates of the probabilities associated with extreme events contribute to a good understanding of the risk taken by an insurance company. In addition, estimates of certain risk measures can be obtained, such as the Value-at-Risk, which is a quantile. This may be quite useful for risk management purposes, as it allows one to determine high quantiles of the insurance company's losses and therefore to obtain capital amounts that will be adequate with high probability. There exist various estimators for $\theta$ in the literature; see Beirlant et al. (2006), Gardes and Girard (2008), and Girard (2004). A comparison study is given in Gardes and Girard (2006). Since condition (1.1) is only assumed asymptotically, each of

E-mail address: [email protected] (D. Li). doi:10.1016/j.jspi.2010.01.039


these proposed estimators for $\theta$ can only involve a fraction of the upper order statistics, and how to choose this fraction plays an important role in practice. Motivated by similar studies on estimating the extreme value index in Matthys and Beirlant (2003) and Matthys et al. (2004), Diebolt et al. (2008a, b) proposed ways to choose the optimal fraction in estimating both $\theta$ and high quantiles of $F$. Moreover, bias reduction estimators for both $\theta$ and high quantiles are proposed in Diebolt et al. (2008a, b) and Dierckx et al. (2009). It is known that there exists a theoretical optimal choice of the sample fraction in estimating the tail index of a heavy tailed distribution when the second order regular variation index is negative. In addition, a bias reduced estimator for the tail index theoretically produces a smaller order of asymptotic mean squared error than the corresponding biased tail index estimator. Since estimation of the Weibull tail coefficient is partly motivated by the similar study of estimating the tail index of a heavy tailed distribution, one may conjecture that bias reduction in estimating $\theta$ is always better. Although the above mentioned papers are in favor of bias reduction estimation for $\theta$, we show that bias reduction estimation is not always better in the sense of asymptotic mean squared error, and that the choice of sample fraction for a bias reduction estimator of $\theta$ becomes practically difficult. That is, a bias reduction estimator for $\theta$ is not particularly useful either theoretically or practically. These observations are in contrast to the case of tail index estimation. Finally, we propose a refined class of Weibull tailed distributions which is more useful in estimating high quantiles and extreme tail probabilities.

We organize this paper as follows. Section 2 presents our main findings. A simulation study is given in Section 3. Some conclusions are drawn in Section 4.

2. Main results

Before giving our statements, we list some known estimators for $\theta$ and their asymptotic results. Suppose $X_1,\ldots,X_n$ are independent and identically distributed random variables with distribution function $F$. Let $X_{n,1}\le\cdots\le X_{n,n}$ denote the order statistics of $X_1,\ldots,X_n$. Throughout we assume that $F$ satisfies (1.1). Here we focus on the following estimators studied in Diebolt et al. (2008a, b) and Dierckx et al. (2009):
$$\hat\theta_H(k)=\frac{k^{-1}\sum_{i=1}^{k}\log(X_{n,n-i+1}/X_{n,n-k})}{k^{-1}\sum_{i=1}^{k}\log\log((n+1)/i)-\log\log((n+1)/(k+1))},$$

$$\hat\theta_{R,1}(k)=k^{-1}\sum_{i=1}^{k} i\log(n/i)\log(X_{n,n-i+1}/X_{n,n-i}),$$

$$\hat\theta_{R,2}(k)=k^{-1}\sum_{i=1}^{k} i\log(n/i)\log(X_{n,n-i+1}/X_{n,n-i})-\frac{\sum_{j=1}^{k}(a_j-\bar a)\,j\log(n/j)\log(X_{n,n-j+1}/X_{n,n-j})}{\sum_{j=1}^{k}(a_j-\bar a)^2}\,\bar a,$$

where $a_j=(\log(n/j)/\log(n/k))^{-1}$, $\bar a=k^{-1}\sum_{j=1}^{k}a_j$, and

$$\hat\theta_M(k)=\left\{1-\frac{\sum_{j=1}^{k}\log\{\hat m(X_{n,n-j})/\hat m(X_{n,n-k-1})\}}{\sum_{j=1}^{k}\log(X_{n,n-j}/X_{n,n-k-1})}\right\}^{-1},$$

where $\hat m(X_{n,n-k})=k^{-1}\sum_{i=1}^{k}X_{n,n-i+1}-X_{n,n-k}$. The estimator $\hat\theta_H(k)$ was first proposed by Beirlant et al. (1996) and its asymptotic limit was derived in Girard (2004). The estimators $\hat\theta_{R,2}(k)$ and $\hat\theta_M(k)$ are bias-reduced estimators for $\theta$ in the sense that their asymptotic bias is negligible when $\sqrt{k}\,b(\log(n/k))\to\lambda\in(-\infty,\infty)$, where $b$ is defined in (2.1) below. Here we want to compare these two bias-reduced estimators with the possibly biased estimators $\hat\theta_H(k)$ and $\hat\theta_{R,1}(k)$ in terms of asymptotic mean squared errors.

In order to derive the asymptotic limits of the above estimators, one needs the following condition, stricter than (1.1): there exist $\rho\le 0$ and $b(x)\to 0$ (as $x\to\infty$) such that
$$\lim_{x\to\infty} b^{-1}(x)\log\frac{\ell(xy)}{\ell(x)}=\frac{y^{\rho}-1}{\rho}\quad\text{for all } y>0. \qquad (2.1)$$
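The estimators above are straightforward to compute. The following sketch is ours, not the authors': it implements $\hat\theta_H(k)$ and $\hat\theta_{R,1}(k)$ following the displayed formulas and checks them on standard exponential data, for which $\theta=1$ and $\ell\equiv 1$ (so the bias terms vanish).

```python
import math
import random

def theta_H(x_sorted, k):
    """Hill-type estimator of the Weibull tail coefficient (Girard, 2004)."""
    n = len(x_sorted)
    top = x_sorted[n - k:]            # X_{n,n-k+1}, ..., X_{n,n}
    x_k = x_sorted[n - k - 1]         # X_{n,n-k}
    num = sum(math.log(v / x_k) for v in top) / k
    den = (sum(math.log(math.log((n + 1) / i)) for i in range(1, k + 1)) / k
           - math.log(math.log((n + 1) / (k + 1))))
    return num / den

def theta_R1(x_sorted, k):
    """Possibly biased estimator theta_hat_{R,1}(k) based on log-spacings."""
    n = len(x_sorted)
    s = 0.0
    for i in range(1, k + 1):
        # X_{n,n-i+1} = x_sorted[n-i], X_{n,n-i} = x_sorted[n-i-1]
        s += i * math.log(n / i) * math.log(x_sorted[n - i] / x_sorted[n - i - 1])
    return s / k

random.seed(1)
n, k = 2000, 150
x = sorted(random.expovariate(1.0) for _ in range(n))
est_H = theta_H(x, k)
est_R1 = theta_R1(x, k)
print(est_H, est_R1)    # both should be close to theta = 1
```

Both estimates fluctuate around the true value 1 with standard deviation of order $\theta/\sqrt{k}\approx 0.08$, in line with Results 1 and 2 below.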

From now on we assume that (1.1) and (2.1) hold and that $k=k(n)\to\infty$ and $k/n\to 0$ as $n\to\infty$.

Result 1 (Theorem 1 of Gardes and Girard, 2008). If
$$k^{1/2}b(\log n)\to\lambda\in(-\infty,\infty)\quad\text{and}\quad k^{1/2}/\log n\to 0, \qquad (2.2)$$
then
$$\sqrt{k}\{\hat\theta_H(k)-\theta\}\ \stackrel{d}{\to}\ N(\lambda,\theta^2).$$

Result 2 (Theorem 2.2 of Diebolt et al., 2008a). If
$$|k\,b(k)|\to\infty,\quad k^{1/2}b(\log(n/k))\to\lambda\in(-\infty,\infty)\quad\text{and}\quad \log k/\log n\to 0\ \text{when}\ \lambda=0, \qquad (2.3)$$


then
$$\sqrt{k}\left\{\hat\theta_{R,1}(k)-\theta-b(\log(n/k))\,k^{-1}\sum_{j=1}^{k}a_j^{-\rho}\right\}\ \stackrel{d}{\to}\ N(0,\theta^2).$$

Result 3 (Theorem 3.1 of Diebolt et al., 2008a). If
$$|k\,b(k)|\to\infty,\quad \frac{\sqrt{k}}{\log(n/k)}\,b(\log(n/k))\to\Lambda\in(-\infty,\infty)$$
and
$$\frac{\sqrt{k}}{\log(n/k)}\to\infty\quad\text{and}\quad\frac{\log^2 k}{\log(n/k)}\to 0\quad\text{when }\Lambda=0, \qquad (2.4)$$
then
$$\frac{\sqrt{k}}{\log(n/k)}\{\hat\theta_{R,2}(k)-\theta\}\ \stackrel{d}{\to}\ N(0,\theta^2).$$

Result 4 (Theorem 2.3 of Dierckx et al., 2009). If $x^{-\rho}|b(x)|$ is a normalized slowly varying function and
$$k^{1/2}/\log(n/k)\to\infty\quad\text{and}\quad \log^2 k/\log n\to 0, \qquad (2.5)$$
then
$$\frac{\sqrt{k}}{\log(n/k)}\left\{\hat\theta_M(k)-\theta-(1+\rho)b(\log(n/k))-\frac{\theta-\theta^2}{\log(n/k)}\right\}\ \stackrel{d}{\to}\ N(0,\theta^2).$$

Now, using the above results, we can articulate our statements as follows.

Statement 1 (No theoretical optimal k). Recently, Diebolt et al. (2008a) proposed to choose $k$ to minimize the following estimated asymptotic mean squared error:
$$\widehat{\mathrm{AMSE}}(k)=k^{-1}\hat\theta_{R,1}^2(k)+\left\{\frac{\sum_{j=1}^{k}(a_j-\bar a)\,j\log(n/j)\log(X_{n,n-j+1}/X_{n,n-j})}{\sum_{j=1}^{k}(a_j-\bar a)^2}\,k^{-1}\sum_{j=1}^{k}a_j\right\}^2. \qquad (2.6)$$

Now the question is whether the minimum exists. Note that the theoretical asymptotic mean squared error of $\hat\theta_{R,1}(k)$ is
$$\mathrm{AMSE}(k)=k^{-1}\theta^2+\left\{b(\log(n/k))\,k^{-1}\sum_{j=1}^{k}a_j^{-\rho}\right\}^2.$$
Since $b$ is regularly varying with index $\rho$, (2.3) implies that $\limsup_{n\to\infty}\sqrt{k}\,\{\log(n/k)\}^{\rho-\varepsilon}<\infty$ for any $\varepsilon>0$, i.e., $\sqrt{k}=O(\{\log(n/k)\}^{-\rho+\varepsilon})=O(\{\log n\}^{-\rho+\varepsilon})$. Thus $\log k=o(k^{1/(2\varepsilon-2\rho)})=o(\log n)$, which implies that
$$\begin{cases}\displaystyle\lim_{n\to\infty}\frac{\log(n/k)}{\log n}=1-\lim_{n\to\infty}\frac{\log k}{\log n}=1,\\[6pt] \displaystyle\lim_{n\to\infty}\frac{b(\log(n/k))}{b(\log n)}=\lim_{n\to\infty}\left(\frac{\log(n/k)}{\log n}\right)^{\rho}=1.\end{cases} \qquad (2.7)$$

Write $a_j=(1-\log(j/k)/\log(n/k))^{-1}$. For any $t>0$ we have, for all large $n$,
$$k^{-1}\sum_{i=1}^{k}a_i^{-\rho}\ \ge\ k^{-1}\sum_{i=1}^{k}\left(1-\frac{\log(i/k)}{t}\right)^{\rho}\ \ge\ \int_0^1\left(1-\frac{\log x}{t}\right)^{\rho}dx.$$
Taking $t\to\infty$, we have
$$\lim_{k\to\infty}k^{-1}\sum_{i=1}^{k}a_i^{-\rho}=1. \qquad (2.8)$$

By (2.7) and (2.8), we have
$$\mathrm{AMSE}(k)=\{k^{-1}\theta^2+b^2(\log n)\}\{1+o(1)\}.$$
Apparently the minimum of AMSE(k) is achieved when $k=n$. Hence, the theoretical optimal $k$, in the sense of minimizing the asymptotic mean squared error of $\hat\theta_{R,1}$, does not exist at all, and so the method for choosing $k$ in Diebolt et al. (2008a) is not mathematically sound. The same happens for the way of choosing $k$ in estimating high quantiles proposed in Diebolt et al. (2008b). These findings are not surprising, since a similar issue arises in estimating an extreme value index $\gamma$, where the case $\gamma=0$ is excluded when considering the optimal choice of sample fraction.
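The limit (2.8) can be illustrated numerically. The sketch below is our illustration (the choices $\rho=-1$ and $k=\lfloor\log n\rfloor^3$ are arbitrary but keep $k$ of logarithmic order in $n$): the average $k^{-1}\sum_{j=1}^{k}a_j^{-\rho}$ creeps toward 1 as $n$ grows, which is what reduces the bias term to $b(\log n)$ in the display above.

```python
import math

def avg_a_pow(n, k, rho):
    # a_j = (log(n/j)/log(n/k))^{-1}; average of a_j^{-rho} over j = 1..k
    Lk = math.log(n / k)
    return sum((Lk / math.log(n / j)) ** (-rho) for j in range(1, k + 1)) / k

vals = []
for n in (1e8, 1e16, 1e32):
    k = int(math.log(n)) ** 3      # a sample fraction of order (log n)^3
    vals.append(avg_a_pow(n, k, rho=-1.0))
print(vals)
```

The three averages increase monotonically toward (but stay below) 1, illustrating both (2.8) and why the convergence, being driven by $\log(n/k)\to\infty$, is very slow.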


Statement 2 (No need to reduce bias when $\sqrt{k}\,b(\log(n/k))\to\lambda\in(-\infty,\infty)$). It follows from Results 1–4 that the biased estimators $\hat\theta_H(k)$ and $\hat\theta_{R,1}(k)$ have a faster rate of convergence than the bias-reduced estimators $\hat\theta_{R,2}(k)$ and $\hat\theta_M(k)$. Hence, when one employs the same $k$ such that $\sqrt{k}\,b(\log(n/k))\to\lambda\in(-\infty,\infty)$, the biased estimators have a smaller order of mean squared error than the bias-reduced estimators. This is different from the study for a heavy tailed distribution.

Statement 3 (Bias reduction is useful only when a large sample fraction is employed). Now let us compare the biased estimator $\hat\theta_{R,1}(m)$ with the bias reduction estimator $\hat\theta_{R,2}(k)$ when $m$ and $k$ satisfy (2.3) with $\lambda\ne 0$ and (2.4) with $\Lambda\ne 0$, respectively. By (2.7) and (2.8), Results 2 and 3 imply that the asymptotic mean squared errors of $\hat\theta_{R,1}(m)$ and $\hat\theta_{R,2}(k)$ are $b^2(\log n)\{1+\theta^2/\lambda^2\}$ and $b^2(\log n)\,\theta^2/\Lambda^2$, respectively. Hence, $\hat\theta_{R,2}(k)$ has a smaller asymptotic mean squared error than $\hat\theta_{R,1}(m)$ only when $\Lambda^2\ge\lambda^2\theta^2/(\lambda^2+\theta^2)$. That is, when the sample fraction $k$ in the bias-reduced estimator $\hat\theta_{R,2}(k)$ is not large enough, i.e., $\Lambda$ is not large enough, the bias-reduced estimator $\hat\theta_{R,2}(k)$ has a larger asymptotic mean squared error than the biased estimator $\hat\theta_{R,1}(m)$. On the other hand, how large a sample fraction should be chosen in a bias-reduced estimator is practically difficult to determine. This is different from tail index estimation, where a bias reduction tail index estimator has a smaller order of asymptotic mean squared error than a biased one.

Statement 4 (Not enough for estimating an extreme tail probability). It is known that heavy tailed distributions can be employed to estimate both high quantiles and extreme tail probabilities. Although model (1.1) has been employed in estimating high quantiles, it is doubtful that it can be used to estimate an extreme tail probability. Suppose
$$1-F(x)\sim x^{\alpha}\exp\{-cx^{1/\theta}\}=\exp\{-cx^{1/\theta}+\alpha\log x\}$$
as $x\to\infty$, which satisfies (1.1). As in Diebolt et al. (2008b), estimating a high quantile for (1.1) is based on the inverse function of $\exp\{-cx^{1/\theta}\}$ and estimators for $c$ and $\theta$. However, the factor $x^{\alpha}$ is not negligible in estimating an extreme tail probability, i.e., estimating $1-F(x_n)$ where $x_n\to\infty$ as $n\to\infty$. Therefore a more refined model than (1.1) is needed for estimating an extreme tail probability. A possible class is

$$1-F(x)=cx^{\alpha}\exp\{-dx^{1/\theta}\}\{1+O(x^{-\beta})\} \qquad (2.9)$$

as $x\to\infty$, where $c>0$, $\alpha\in\mathbb{R}$, $d>0$, $\beta>0$ and $\theta>0$. Note that the class of distributions satisfying (2.9) is a sub-class of the Weibull tailed distributions defined in (1.1). One example which is a Weibull tailed distribution but does not satisfy (2.9) is $1-F(x)=\exp\{-x^{a}(\log x)^{b}\}$ for some $a,b>0$ and large $x$. Since Theorem 1.2.6 of de Haan and Ferreira (2006) implies that (2.9) is in the domain of attraction of the Gumbel distribution, one may wonder how useful (2.9) is in estimating high quantiles in comparison with the methods developed in extreme value theory.

Statement 5 (Model (2.9) is useful in estimating very high quantiles). Let us consider estimating the high quantile $x_p$ defined by $p=1-F(x_p)$, where $p=p(n)\to 0$. A proposed estimator based on a Weibull tailed distribution in the literature is
$$\tilde x_p(k)=X_{n,n-k+1}\{\log(1/p)/\log(n/k)\}^{\hat\theta_H(k)},$$
and it follows from Diebolt et al. (2008b) that
$$\tilde x_p(k)/x_p-1=O_p\!\left(\log\!\left(\frac{\log(1/p)}{\log(n/k)}\right)\Big/\sqrt{k}\right) \qquad (2.10)$$
when (2.9) holds and
$$\sqrt{k}\,\frac{\log\log n}{\log n}\to\lambda<\infty,\qquad \liminf_{n\to\infty}\frac{\log(1/p)}{\log(n/k)}>1. \qquad (2.11)$$

Since (2.9) implies that $F$ is in the domain of attraction of the Gumbel distribution, $x_p$ can be estimated by some known methods in extreme value theory; see Section 4.3 of de Haan and Ferreira (2006). Since the extreme value index is zero, we can estimate $x_p$ by
$$\hat x_p(k)=X_{n,n-k}+\log\!\left(\frac{k}{np}\right) X_{n,n-k}\,k^{-1}\sum_{i=1}^{k}\log\frac{X_{n,n-i+1}}{X_{n,n-k}},$$
which is slightly different from the estimator for $x_p$ given in Section 4.3.1 of de Haan and Ferreira (2006). Denote the inverse function of $1/(1-F(t))$ by $U(t)$. Then (2.9) implies that
$$U(x)=d^{-\theta}(\log x)^{\theta}\left\{1+a_1\frac{\log\log x}{\log x}+a_2\frac{1}{\log x}+a_3\frac{(\log\log x)^2}{(\log x)^2}+a_4\frac{\log\log x}{(\log x)^2}+a_5\frac{1}{(\log x)^2}+O\!\left(\frac{(\log\log x)^3}{(\log x)^3}+(\log x)^{-\beta\theta-1}\right)\right\} \qquad (2.12)$$
as $x\to\infty$, where

a1 ¼ ay ;

2

a2 ¼ ylog cay log d;

a3 ¼ 

1y 2 a ; 2y 1

ARTICLE IN PRESS 2022

A.V. Asimit et al. / Journal of Statistical Planning and Inference 140 (2010) 2018–2024

a4 ¼ aya1 

a1 a2 ð1yÞ

y

a5 ¼ aya2 

and

1y 2 a : 2y 2

It follows from (2.12) that (

log log t 1 ðlog log tÞ2 log log t 1 þa4 þ a5 þa2 þa3 log t log t ðlog tÞ2 ðlog tÞ2 ðlog tÞ2 !) ! a1 a2 þ ya2 a1 ðy1Þlog log t y ðlog xÞ2 yðy1Þ ðlog log tÞ2 by1 þ þlog x þ þ þO þ ðlog tÞ log t 2 ðlog tÞ2 ðlog tÞ2 ðlog tÞ2 ðlog tÞ3

UðtxÞ ¼ dy ðlog tÞy 1 þ a1

ð2:13Þ

for any x 40 as t-1. Hence, when by 41 and ya1, UðtxÞUðtÞ log x ðlog xÞ2 aðtÞ lim ¼ t-1 AðtÞ 2 for x 4 0, where ( aðtÞ ¼ dy ðlog tÞy and AðtÞ ¼ d

y

a1 a2 þ ya2 ðlog tÞ2

þ

a1 ðy1Þlog log t ðlog tÞ2

þ

y

)

log t

y2

yðy1Þðlog tÞ

x^ p ðkÞ=xp 1 ¼ Op

=aðtÞ. Similar to the proof of Theorem 4.3.1 of de Haan and Ferreira (2006), we have ! aðn=kÞflogðk=ðnpÞÞg2 pffiffiffi ð2:14Þ xp k

when pffiffiffi pffiffiffi ð2:15Þ kAðn=kÞ-l 2 ð1; 1Þ; logðnpÞ ¼ oð kÞ; np ¼ oðkÞ: pffiffiffi pffiffiffi Note that the condition kAðn=kÞ-l 2 ð1; 1Þ in (2.15) and the formula for A(t) imply that k=logðn=kÞ pffiffiffi pffiffifficonverges to a finite number, i.e., k=log n converges to a finite number. Combining this with the condition logðnpÞ ¼ oð kÞ in (2.15), we conclude that (2.15) implies that limn-1 logðnpÞ=log n ¼ 0. It is easy to check that (2.11) implies that limn-1 flogðnpÞg=log n 4 0. Hence model (2.9) works for a much higher quantile than the standard high quantile estimation developed in extreme value theory. This is exactly what we need to cope with the extrapolation limitation of using the condition of domain of attraction of the Gumbel distribution. Is it possible to have a high quantile estimator work for the case limn-1 logðnpÞ=log n Z0? Since the high quantile estimator x~ p ðkÞ is only based on the first order in (2.12), the model approximation error becomes large when xp is small. This explains why x~ p only works for a very high quantile. It is of interest to study a high quantile estimator based on (2.12) and estimators for c; a; d; y under the setup of (2.9). We conjecture that this new high quantile estimator works when limn-1 logðnpÞ=log n Z0. If this is true, then the model (2.9) becomes more practically useful than the methods based on either (1.1) or the domain of attraction of the Gumbel distribution since one does not need to worry whether the target quantile is high enough.
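Both quantile estimators are easy to code. The sketch below is our illustration, not the paper's simulation: it computes $\tilde x_p(k)$ and $\hat x_p(k)$ on a single standard exponential sample, for which $\theta=1$, $U(t)=\log t$ and the true quantile is $x_p=\log(1/p)$.

```python
import math
import random

def theta_H(x_sorted, k):
    # Hill-type Weibull-tail estimator used inside x_tilde
    n = len(x_sorted)
    num = sum(math.log(v / x_sorted[n - k - 1]) for v in x_sorted[n - k:]) / k
    den = (sum(math.log(math.log((n + 1) / i)) for i in range(1, k + 1)) / k
           - math.log(math.log((n + 1) / (k + 1))))
    return num / den

def x_tilde(x_sorted, k, p):
    # Weibull-tail high quantile estimator of Diebolt et al. (2008b)
    n = len(x_sorted)
    return x_sorted[n - k] * (math.log(1 / p) / math.log(n / k)) ** theta_H(x_sorted, k)

def x_hat(x_sorted, k, p):
    # Gumbel-domain (extreme value index zero) high quantile estimator
    n = len(x_sorted)
    base = x_sorted[n - k - 1]                       # X_{n,n-k}
    scale = base * sum(math.log(v / base) for v in x_sorted[n - k:]) / k
    return base + scale * math.log(k / (n * p))

random.seed(7)
n, k, p = 2000, 100, 1e-6
x = sorted(random.expovariate(1.0) for _ in range(n))
true_xp = math.log(1 / p)
xt, xh = x_tilde(x, k, p), x_hat(x, k, p)
print(xt, xh, true_xp)
```

Both estimates land in a broad band around the true quantile; the paper's simulation in Section 3 quantifies how their mean squared errors separate as $p$ decreases.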

Fig. 1. Plots of $\widehat{\mathrm{AMSE}}(\hat K_{\mathrm{opt}})$ against $\hat K_{\mathrm{opt}}$ for Gamma(1.2, 1). (Horizontal axis: estimated optimal $k$, 940–1000; vertical axis: estimated optimal AMSE, 0.00070–0.00090.)


Fig. 2. MSEs of $\hat\theta_{R,1}(m)$ (biased estimator) and $\hat\theta_{R,2}(k)$ (bias-reduced estimator) plotted against $k=m$ (0–800) for Gamma(1.2, 1). (Vertical axis: MSE, 0.01–0.06.)

Fig. 3. The first 50 smallest MSEs of $\tilde x_p(k)$ and $\hat x_p(k)$ plotted in the left and right panels, respectively, for Gamma(1.2, 1), with $p=10^{-2},10^{-4},10^{-6}$.

3. A simulation study

Here we perform a simulation study to support Statements 1, 2, 3 and 5. We simulate 1000 random samples of size $n=1000$ from the Gamma distribution with shape parameter 1.2 and scale parameter 1.

First, for each sample we determine the value $k\in[2,n-1]$ at which $\widehat{\mathrm{AMSE}}(k)$ in (2.6) is minimized. Let us denote this optimal value by $\hat K_{\mathrm{opt}}$. Fig. 1 plots $\widehat{\mathrm{AMSE}}(\hat K_{\mathrm{opt}})$ against $\hat K_{\mathrm{opt}}$. This figure shows that most of the optimal values are near the sample size $n=1000$, which supports Statement 1.

Next, to support Statements 2 and 3, a simulation is performed in which the biased and bias-reduced estimators, $\hat\theta_{R,1}(m)$ and $\hat\theta_{R,2}(k)$, are compared. In Fig. 2, we plot the mean squared errors of these two estimators against different choices of $k=m$. From this figure, we observe that the MSE of $\hat\theta_{R,1}(m)$ is smaller than that of $\hat\theta_{R,2}(k)$ when $k=m\le 500$, which supports Statement 2. When $m$ is around 200, one needs a very large $k$ to ensure that the MSE of $\hat\theta_{R,2}(k)$ is smaller than that of $\hat\theta_{R,1}(m)$. This observation supports Statement 3.

Finally, we calculate the MSEs of $\tilde x_p(k)$ and $\hat x_p(k)$ for $p=10^{-2},10^{-4},10^{-6}$. The first 50 smallest MSEs of these two estimators are plotted in Fig. 3, which shows that $\tilde x_p(k)$ works much better than $\hat x_p(k)$ as $p$ becomes smaller.
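A reduced version of the Fig. 2 comparison can be reproduced in a few lines. The sketch below is our illustration, with standard exponential data rather than Gamma(1.2, 1) and only 300 replications: it computes Monte Carlo MSEs of $\hat\theta_{R,1}$ and $\hat\theta_{R,2}$ at $k=m=200$, following the formulas in Section 2. Since $\ell\equiv 1$ here, both estimators are essentially unbiased, so the comparison isolates the variance inflation of the bias-reduced estimator.

```python
import math
import random

def theta_R1_R2(x_sorted, k):
    # Returns (theta_R1, theta_R2) following the formulas of Section 2
    n = len(x_sorted)
    Lk = math.log(n / k)
    z = [i * math.log(n / i) * math.log(x_sorted[n - i] / x_sorted[n - i - 1])
         for i in range(1, k + 1)]
    a = [Lk / math.log(n / j) for j in range(1, k + 1)]   # a_j
    abar = sum(a) / k
    t1 = sum(z) / k
    slope = (sum((aj - abar) * zj for aj, zj in zip(a, z))
             / sum((aj - abar) ** 2 for aj in a))
    return t1, t1 - slope * abar

random.seed(3)
n, k, reps = 1000, 200, 300
se1 = se2 = 0.0
for _ in range(reps):
    x = sorted(random.expovariate(1.0) for _ in range(n))
    t1, t2 = theta_R1_R2(x, k)
    se1 += (t1 - 1.0) ** 2     # true theta = 1 for the exponential
    se2 += (t2 - 1.0) ** 2
mse1, mse2 = se1 / reps, se2 / reps
print(mse1, mse2)
```

Consistent with Statement 2 and Fig. 2, at this moderate sample fraction the biased estimator has the smaller Monte Carlo MSE.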


4. Conclusions

Unlike tail index estimation, a theoretical optimal sample fraction for estimating the Weibull tail coefficient does not exist, and a bias reduction estimator only shows an advantage when a large sample fraction is employed. There is no theory to guide the choice of a large sample fraction that still satisfies the necessary conditions, such as $(\sqrt{k}/\log(n/k))\,b(\log(n/k))\to\Lambda\in(-\infty,\infty)$. Therefore, one should be extremely cautious in employing any adaptive estimation or bias reduction estimation for the Weibull tail coefficient in practice, due to the lack of theoretical support. Weibull tailed distributions are useful in estimating a higher quantile than the standard high quantile estimation obtained by assuming the condition of the domain of attraction of the Gumbel distribution. The proposed refined class of Weibull tailed distributions is necessary for estimating extreme tail probabilities and may be more practical for estimating high quantiles.

Acknowledgments

We thank a reviewer for his/her helpful comments. Li's research was partially supported by NNSFC Grant 10801038. Peng's research was supported by NSF Grant SES-0631608.

References

Beirlant, J., Bouquiaux, C., Werker, B.J.M., 2006. Semiparametric lower bounds for tail index estimation. J. Statist. Plann. Inference 136, 705–729.
Beirlant, J., Teugels, J., Vynckier, P., 1996. Practical Analysis of Extreme Values. Leuven University Press, Leuven.
de Haan, L., Ferreira, A., 2006. Extreme Value Theory: An Introduction. Springer, New York.
Diebolt, J., Gardes, L., Girard, S., Guillou, A., 2008a. Bias-reduced estimators of the Weibull tail-coefficient. Test 17, 311–331.
Diebolt, J., Gardes, L., Girard, S., Guillou, A., 2008b. Bias-reduced extreme quantile estimators of Weibull tail-distributions. J. Statist. Plann. Inference 138, 1389–1401.
Dierckx, G., Beirlant, J., de Waal, D., Guillou, A., 2009. A new estimation method for Weibull-type tails based on the mean excess function. J. Statist. Plann. Inference 139, 1905–1920.
Gardes, L., Girard, S., 2006. Comparison of Weibull tail-coefficient estimators. REVSTAT 4, 163–188.
Gardes, L., Girard, S., 2008. Estimation of the Weibull tail-coefficient with linear combination of upper order statistics. J. Statist. Plann. Inference 138, 1416–1427.
Girard, S., 2004. A Hill type estimator of the Weibull tail-coefficient. Comm. Statist. Theory Methods 33, 205–234.
Matthys, G., Beirlant, J., 2003. Estimating the extreme value index and high quantiles with exponential regression models. Statist. Sinica 13, 853–880.
Matthys, G., Delafosse, E., Guillou, A., Beirlant, J., 2004. Estimating catastrophic quantile levels for heavy-tailed distributions. Insurance Math. Econom. 34, 517–537.