Accepted Manuscript

Tail conditional moments for elliptical and log-elliptical distributions
Zinoviy Landsman, Udi Makov, Tomer Shushi

PII: S0167-6687(15)30277-8
DOI: http://dx.doi.org/10.1016/j.insmatheco.2016.09.001
Reference: INSUMA 2267
To appear in: Insurance: Mathematics and Economics
Received date: November 2015
Revised date: September 2016
Accepted date: 6 September 2016

Please cite this article as: Landsman, Z., Makov, U., Shushi, T., Tail conditional moments for elliptical and log-elliptical distributions. Insurance: Mathematics and Economics (2016), http://dx.doi.org/10.1016/j.insmatheco.2016.09.001
Tail Conditional Moments for Elliptical and Log-Elliptical Distributions

Zinoviy Landsman, Udi Makov, Tomer Shushi
Department of Statistics, University of Haifa, Mount Carmel, 31905, Haifa, Israel

September 6, 2016

Abstract. In this paper we provide the tail conditional moments for the class of elliptical distributions, which was introduced in Kelker (1970) [12] and widely discussed in Gupta et al. (2013) [9], and for the class of log-elliptical distributions. These families include important members such as the normal, Student-t, logistic, Laplace, and log-normal distributions. We give analytic formulae for the nth order unconditional moments of elliptical distributions, which have not been provided before. We also propose novel risk measures, the tail conditional skewness and the tail conditional kurtosis, for examining the skewness and the kurtosis of the tail of loss distributions, respectively.

Keywords: Elliptical distributions; Log-elliptical distributions; Tail conditional expectation; Tail conditional moments; Tail variance
1 Introduction
Let X be a random variable with cumulative distribution function F_X(x). A risk measure of X is a mapping ρ from the space of random risks R to the real line ℝ, i.e.,

$$\rho : \mathcal{R} \to \mathbb{R}, \qquad X \mapsto \rho(X).$$

An important risk measure is the Value at Risk (VaR), which is defined as a threshold value such that the probability that the loss on a portfolio exceeds this value is a given probability level 1 − q (Linsmeier and Pearson (2000) [21], Liu and Tse (2015) [22], and Kaplanski and Levy (2015) [11]),

$$\Pr\left(X > VaR_q(X)\right) = 1 - q, \qquad q \in (0,1).$$

Another important risk measure is the Tail Conditional Expectation (TCE), which is defined as the expected loss given that the loss exceeds the VaR, i.e.,

$$TCE_q(X) := E\left(X \mid X > VaR_q(X)\right). \tag{1.1}$$

These risk measures are widely used and studied in the insurance and risk management sectors and in the finance literature for quantifying a variety of risks (Landsman and Valdez (2003) [15], Chen et al. (2014) [3] and Yang et al. (2015) [26]). In this paper we consider a novel risk measure, the nth Tail Conditional Moment (TCM), which is defined as follows:

$$TCM_q(X^n) = TCM_X(x_q) = E\left(\left[X - TCE_q(X)\right]^n \mid X > x_q\right), \tag{1.2}$$
where x_q := VaR_q(X). We consider this measure since these moments allow a better understanding of the behavior of a risk along the tail of its distribution. A special case of (1.2) is TCM_q(X²), which equals the tail variance (TV), a risk measure that examines the dispersion of the tail of a distribution for some quantile q (see, for instance, Landsman (2010) [18]). The TV of X is defined as follows:

$$TV_q(X) := Var\left(X \mid X > VaR_q(X)\right) = TCM_q(X^2).$$

The conditional central limit theorem arises naturally for the sequence of iid conditional random variables X₁|X₁ > x_q, X₂|X₂ > x_q, …, where TCE_q(X) and TV_q(X) are the expectation and the variance of these random variables. Thus

$$Z_{n,q} = \frac{\sum_{i=1}^{n}\left(X_i \mid X_i > x_q\right) - n\,TCE_q(X)}{\sqrt{n\,TV_q(X)}} \xrightarrow{d} N(0,1).$$
We note that the motivation behind the form of the proposed Tail Conditional Moment, i.e. taking E([X − TCE_q(X)]^n | X > x_q) instead of E([X − E(X)]^n | X > x_q), arises from the asymptotic expansion of the conditional characteristic function

$$\varphi_q(t) := E\left(e^{it\left(X - TCE_q(X)\right)} \mid X > x_q\right) = 1 - \frac{t^2}{2}\,TV_q(X) + \sum_{j=3}^{k}\frac{(it)^j}{j!}\,TCM_q(X^j) + o\!\left(|t|^k\right). \tag{1.3}$$
It is well known that risk measures such as VaR, TCE and TV do not provide information on the skewness and the kurtosis of the tail of a distribution. We therefore propose two new risk measures, the tail conditional skewness (TCS) and the tail conditional kurtosis (TCK), which are defined, respectively, as follows:

$$TCS_q(X) = \frac{E\left(\left[X - TCE_q(X)\right]^3 \mid X > x_q\right)}{TV_q(X)^{3/2}}, \tag{1.4}$$

and

$$TCK_q(X) = \frac{E\left(\left[X - TCE_q(X)\right]^4 \mid X > x_q\right)}{TV_q(X)^{2}} - 3. \tag{1.5}$$
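The definitions (1.1)–(1.5) can be estimated directly from simulated or historical losses by restricting a sample to its upper tail. The following sketch is our own illustration, not from the paper; the function name and the exponential test case are assumptions. For Exp(1), the lack-of-memory property makes the tail quantities exact: X | X > x_q is x_q + Exp(1), so TCE = x_q + 1, TV = 1, TCS = 2 and TCK = 6.

```python
import numpy as np

def tail_conditional_measures(x, q):
    """Empirical TCE, TV, TCS and TCK of the losses in `x` beyond the q-quantile."""
    x = np.asarray(x, dtype=float)
    var_q = np.quantile(x, q)          # empirical VaR_q
    tail = x[x > var_q]                # observations X | X > VaR_q
    tce = tail.mean()                  # (1.1)
    c = tail - tce                     # centered at TCE, as in (1.2)
    tv = np.mean(c**2)
    tcs = np.mean(c**3) / tv**1.5      # (1.4)
    tck = np.mean(c**4) / tv**2 - 3.0  # (1.5)
    return tce, tv, tcs, tck

# Sanity check on Exp(1): X | X > x_q = x_q + Exp(1) by memorylessness.
rng = np.random.default_rng(0)
x = rng.exponential(size=2_000_000)
tce, tv, tcs, tck = tail_conditional_measures(x, 0.9)
```

The estimator only uses the tail subsample, so the effective sample size is (1 − q) times the original one; in the check above roughly 200,000 observations remain beyond the 90% quantile.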
Notice that (1.4) and (1.5) are simply the extensions of skewness and kurtosis to the tail of the distribution beyond the quantile q; as q → 0 they reduce to the ordinary skewness and kurtosis. The introduction of tail conditional skewness and kurtosis is motivated by the need to understand the behavior of an underlying distribution along the tail, below and above TCE_q(X). In fact, TCS_q(X) < 0 indicates that the tail of the distribution is left-skewed, that is, the probability of X falling below TCE_q(X) is greater than that of falling above it, while TCS_q(X) > 0 indicates that the tail is right-skewed, which means that the probability of X being less than TCE_q(X) is smaller than that of being greater than TCE_q(X). In the case of two risks X, Y such that TCS_q(X) < 0 and TCS_q(Y) > 0, we say that, for the qth quantile, we would rather take risk X than risk Y, since the probability of an extreme loss is lower for X. For TCK_q(X), a comparison with the normal distribution, N, is useful. If TCK_q(X) > TCK_q(N), one can say that the tail of the distribution is 'leptokurtic' (heavier tail and greater risk of extreme outcomes) with respect to N along the tail. Similarly, if TCK_q(X) < TCK_q(N), one can say that the tail of the distribution is 'platykurtic' (lighter tail) with respect to N along the tail.

One more argument can be made in favor of TCK_q(X). Insurance decision makers have little interest in the unconditional expectation of their loss X; rather, they want to know the expectation of the loss X given that the loss exceeds the q·100% level VaR_q(X). This value is obtained by calculating TCE_q(X), which is quite popular and is recommended by the banking laws and regulations issued by the Basel Committee on Banking Supervision (Basel II). But it is no less important to know the interval around TCE_q(X) that the risk does not leave with large probability. Chebyshev's inequality, slightly modified for our case, allows one to estimate this probability as follows:

$$P\left(\left|X - TCE_q(X)\right| \le k\sqrt{TV_q(X)} \,\middle|\, X > VaR_q(X)\right) > 1 - \frac{1}{k^2}.$$

It is well known that Chebyshev's inequality can be sharpened if one knows higher moments. For our case one can write

$$P\left(\left|X - TCE_q(X)\right| \le k\sqrt{TV_q(X)} \,\middle|\, X > VaR_q(X)\right) > 1 - \frac{TCK_q(X) + 3}{k^4}. \tag{1.6}$$

One can see that the lower bound of the confidence probability (the right hand side of inequality (1.6)) becomes higher as TCK_q(X) decreases.

In the next section we define the class of elliptical distributions. In Section 3 we derive the tail conditional moments for the elliptical distributions and provide some important special cases, such as the normal and the Student-t distributions. In Section 4 we derive the conditional moments for the class of log-elliptical distributions. A conclusion is given in Section 5.

2 The class of elliptical distributions
In this section we derive the nth TCM measure for the elliptical family of distributions. Some special members of the elliptical family are also members of the exponential family, for which the nth TCM measures were given in Kim (2010) [13].

Let us introduce the elliptical family of distributions. Let X ~ E_k(μ, Σ, ψ) be a k-variate elliptical random vector with location vector μ, k×k scale matrix Σ, and a function ψ(t) called the characteristic generator, so that its characteristic function can be expressed as

$$\varphi_{\mathbf{X}}(\mathbf{t}) = \exp\!\left(i\mathbf{t}^T\boldsymbol{\mu}\right)\psi\!\left(\tfrac{1}{2}\mathbf{t}^T\Sigma\mathbf{t}\right). \tag{2.7}$$

If it exists, the probability density function (pdf) of the random vector X has the form

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{\sqrt{|\Sigma|}}\,g^{(k)}\!\left(\tfrac{1}{2}\left(\mathbf{x}-\boldsymbol{\mu}\right)^T\Sigma^{-1}\left(\mathbf{x}-\boldsymbol{\mu}\right)\right), \tag{2.8}$$

where g^{(k)}(u), u ≥ 0, is the density generator, which satisfies the condition (see Furman and Landsman (2006) [8])

$$\int_0^\infty t^{k/2-1}\,g^{(k)}(t)\,dt < \infty. \tag{2.9}$$
We develop an idea proposed in Landsman and Valdez (2003) [15] concerning the cumulative generator G¹(u) = ∫_u^∞ g^{(1)}(t)dt. In particular, we introduce a sequence of cumulative generators G^j(u),

$$G^1(u) = \int_u^\infty g^{(1)}(t)\,dt,\qquad G^2(u) = \int_u^\infty G^{1}(t)\,dt,\ \ldots,\ G^n(u) = \int_u^\infty G^{n-1}(t)\,dt, \tag{2.10}$$

associated with the elliptical family. Let Z = (X − μ)/σ ~ E₁(0, 1, g^{(1)}) be the standardized spherical random variable. If the variance of Z exists, i.e.,
$$\sigma_{Z_1}^2 = 2\int_0^\infty u^2\,g^{(1)}\!\left(\tfrac{1}{2}u^2\right)du = 2\int_0^\infty \sqrt{2u}\,g^{(1)}(u)\,du < \infty, \tag{2.11}$$

then the function

$$f_{Z_1}(z) = \frac{1}{\sigma_{Z_1}^2}\,G^1\!\left(\tfrac{1}{2}z^2\right) \tag{2.12}$$

is a pdf of another spherical random variable Z₁, called the associated random variable of Z (see again [15]). Furthermore, if

$$\sigma_{Z_j}^2 = 2\int_0^\infty u^2\,G^{j-1}\!\left(\tfrac{1}{2}u^2\right)du = 2\int_0^\infty \sqrt{2u}\,G^{j-1}(u)\,du < \infty,\qquad j = 1,2,\ldots, \tag{2.13}$$

with G⁰ := g^{(1)}, then

$$f_{Z_j}(z) = \frac{1}{\sigma_{Z_j}^2}\,G^{j}\!\left(\tfrac{1}{2}z^2\right) \tag{2.14}$$

is also a pdf, of an associated random variable Z_j.
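For a concrete check of this construction, take the univariate normal generator g^{(1)}(u) = (2π)^{−1/2}e^{−u}, for which G¹ = g^{(1)} and every associated variable is again standard normal (cf. Section 3.2.1 below). The following numerical verification of (2.10)–(2.12) is our own illustration:

```python
import math
from scipy.integrate import quad
from scipy.stats import norm

g1 = lambda u: math.exp(-u) / math.sqrt(2 * math.pi)   # normal density generator

# Cumulative generator (2.10): G^1(u) = ∫_u^∞ g^(1)(t) dt; for the normal
# generator this reproduces g^(1) itself.
G1 = lambda u: quad(g1, u, math.inf)[0]

# Normalizer (2.11): σ²_{Z₁} = 2 ∫_0^∞ √(2u) g^(1)(u) du, which is also Var(Z).
sigma2_Z1 = 2 * quad(lambda u: math.sqrt(2 * u) * g1(u), 0, math.inf)[0]

# Associated density (2.12): f_{Z₁}(z) = G^1(z²/2)/σ²_{Z₁}; here again φ(z).
f_Z1 = lambda z: G1(0.5 * z**2) / sigma2_Z1
```

The same three lines work for any generator satisfying (2.9); only `g1` changes.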
3 Tail conditional moments for the class of elliptical distributions

In the sequel we focus on risk measures for the univariate case of the elliptical distributions, since these formulae are also suitable for the multivariate case, where the risk of a portfolio is a weighted sum of risks. In order to obtain the analytic formula of the moments we calculate the conditional moments

$$E\left(X^n \mid X > x_q\right), \tag{3.15}$$

and provide one of the main results of the paper.
Lemma 1. Let X ~ E₁(μ, σ², g^{(1)}), and let Z = (X − μ)/σ. Then, under the conditions

$$G^j(0) = \int_0^\infty G^{j-1}(t)\,dt < \infty,\qquad j = 1,2,\ldots,n, \tag{3.16}$$

and

$$\sigma_{Z_j}^2 < \infty,\qquad j = 1,2,\ldots,n, \tag{3.17}$$

the nth (n ≥ 2) conditional moment of X has the form

$$E\left[X^n \mid X > x_q\right] = \mu^n + n\mu^{n-1}\sigma\,TCE_q(Z) + \sum_{k=2}^{n}\binom{n}{k}\mu^{n-k}\sigma^k\left[z_q^{k-1}\delta_{1,q} + (k-1)\,\sigma_{Z_1}^2\,r_{1,q}(z_q)\,E\!\left(Z_1^{k-2} \mid Z_1 > z_q\right)\right], \tag{3.18}$$

where TCE_q(Z) = δ_{1,q},

$$\delta_{1,q} = \frac{G^1\!\left(\tfrac{1}{2}z_q^2\right)}{1-q} = \frac{\sigma_{Z_1}^2\,f_{Z_1}(z_q)}{1-q}, \qquad r_{1,q}(z_q) = \frac{\overline{F}_{Z_1}(z_q)}{1-q}.$$
Proof. The nth conditional moment has the form

$$E\left[X^n \mid X > x_q\right] = \frac{\int_{x_q}^\infty x^n\,\frac{1}{\sigma}\,g^{(1)}\!\left(\tfrac{1}{2}\left(\tfrac{x-\mu}{\sigma}\right)^2\right)dx}{\overline{F}_X(x_q)}.$$

Using the transformation z = (x − μ)/σ, we have

$$E\left[X^n \mid X > x_q\right] = \frac{\int_{z_q}^\infty (\sigma z + \mu)^n\,g^{(1)}\!\left(\tfrac{1}{2}z^2\right)dz}{\overline{F}_Z(z_q)}.$$

Expanding the term (σz + μ)^n, the integral becomes a sum of powers z^k:

$$E\left[X^n \mid X > x_q\right] = \frac{1}{1-q}\sum_{k=0}^{n}\binom{n}{k}\mu^{n-k}\sigma^k\int_{z_q}^\infty z^k\,g^{(1)}\!\left(\tfrac{1}{2}z^2\right)dz = \mu^n + n\mu^{n-1}\sigma\,TCE_q(Z) + \frac{1}{1-q}\sum_{k=2}^{n}\binom{n}{k}\mu^{n-k}\sigma^k\int_{z_q}^\infty z^k\,g^{(1)}\!\left(\tfrac{1}{2}z^2\right)dz.$$

Then, recalling (2.10) and noting that z\,g^{(1)}(½z²)dz = −dG¹(½z²), integration by parts yields

$$\int_{z_q}^\infty z^k\,g^{(1)}\!\left(\tfrac{1}{2}z^2\right)dz = -\int_{z_q}^\infty z^{k-1}\,dG^1\!\left(\tfrac{1}{2}z^2\right) = z_q^{k-1}\,G^1\!\left(\tfrac{1}{2}z_q^2\right) + (k-1)\int_{z_q}^\infty z^{k-2}\,G^1\!\left(\tfrac{1}{2}z^2\right)dz,$$

and, by (2.12), the last integral equals σ²_{Z₁} F̄_{Z₁}(z_q) E(Z₁^{k−2} | Z₁ > z_q). Dividing by 1 − q, we obtain the explicit form (3.18) of E[X^n | X > x_q] in terms of conditional moments of lower order than n.
We now introduce the first of the two main results of this paper.

Theorem 1. The nth TCM of X ~ E₁(μ, σ², g^{(1)}) takes the form

$$TCM_q(X^n) = \sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\left(\sigma\delta_{1,q}\right)^{n-i}\mu_{i,q}, \tag{3.19}$$

where μ_{0,q} = 1, μ_{1,q} = σδ_{1,q}, and, for i ≥ 2,

$$\mu_{i,q} = E\left[\left(X - E(X)\right)^i \mid X > x_q\right] = \sigma^i\left[z_q^{i-1}\delta_{1,q} + (i-1)\,\sigma_{Z_1}^2\,r_{1,q}(z_q)\,E\!\left(Z_1^{i-2} \mid Z_1 > z_q\right)\right],$$

with δ_{1,q} = G¹(½z_q²)/(1 − q).
Proof. Using the binomial theorem, and noting that TCE_q(X) = μ + σδ_{1,q}, it is clear that

$$TCM_q(X^n) = E\left[\left(X - TCE_q(X)\right)^n \mid X > x_q\right] = E\left[\left((X-\mu) - \sigma\delta_{1,q}\right)^n \mid X > x_q\right] = \sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\left(\sigma\delta_{1,q}\right)^{n-i}E\left[(X-\mu)^i \mid X > x_q\right] = \sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\left(\sigma\delta_{1,q}\right)^{n-i}\mu_{i,q},$$

where μ_{0,q} = E[(X−μ)⁰ | X > x_q] = 1 and μ_{i,q} = E[(X−μ)^i | X > x_q] = E[(X−E(X))^i | X > x_q], i ≥ 1, since E(X) = μ. The measures μ_{i,q} are calculated in the spirit of the proof of Lemma 1. Taking the transformation z = σ^{−1}(x − μ), we note that the tail function satisfies F̄_Z(z_q) = 1 − q. Furthermore, the transformation simplifies the integral in μ_{i,q}, as follows:

$$\mu_{i,q} = \frac{\int_{x_q}^\infty (x-\mu)^i\,\frac{1}{\sigma}\,g^{(1)}\!\left(\tfrac{1}{2}\left(\tfrac{x-\mu}{\sigma}\right)^2\right)dx}{\overline{F}_X(x_q)} = \sigma^i\,\frac{\int_{z_q}^\infty z^i\,g^{(1)}\!\left(\tfrac{1}{2}z^2\right)dz}{1-q}.$$

Taking (2.10) into account and integrating by parts, as in the proof of Lemma 1,

$$\mu_{i,q} = \frac{\sigma^i}{1-q}\left[z_q^{i-1}\,G^1\!\left(\tfrac{1}{2}z_q^2\right) + (i-1)\int_{z_q}^\infty z^{i-2}\,G^1\!\left(\tfrac{1}{2}z^2\right)dz\right] = \sigma^i\left[z_q^{i-1}\delta_{1,q} + (i-1)\,\sigma_{Z_1}^2\,r_{1,q}(z_q)\,E\!\left(Z_1^{i-2} \mid Z_1 > z_q\right)\right].$$
We can now establish several corollaries.

Corollary 1. The tail conditional expectation for the elliptical distribution can be recovered from (3.19) by taking n = 1:

$$TCE_q(X) = \mu + \sigma\delta_{1,q}, \tag{3.20}$$

which was first introduced in [15].

Corollary 2. The tail variance of the elliptical distribution can be derived by considering the cases n = 1 and n = 2. This risk measure takes the form

$$TV_q(X) = TCM_q(X^2) = \sigma^2\left(z_q\delta_{1,q} + \sigma_{Z_1}^2\,r_{1,q}(z_q) - \delta_{1,q}^2\right). \tag{3.21}$$

Here

$$\delta_{1,q} = \frac{G^1\!\left(\tfrac{1}{2}z_q^2\right)}{1-q} = \frac{\sigma_{Z_1}^2\,f_{Z_1}(z_q)}{1-q}, \qquad r_{1,q}(z_q) = \frac{\overline{F}_{Z_1}(z_q)}{1-q}.$$

This measure, for the elliptical distributions, was first introduced in [8].

Corollary 3. The TCS of X takes the form

$$TCS_q(X) = \frac{\sum_{i=0}^{3}(-1)^{3-i}\binom{3}{i}\delta_{1,q}^{3-i}\,\eta_{i,q}}{\left(z_q\delta_{1,q} + \sigma_{Z_1}^2\,r_{1,q}(z_q) - \delta_{1,q}^2\right)^{3/2}}, \qquad \eta_{i,q} := \sigma^{-i}\mu_{i,q}. \tag{3.22}$$

Proof. Using (1.4), we first derive the third conditional central moment:

$$\mu_{3,q} = E\left[\left(X - E(X)\right)^3 \mid X > x_q\right] = \frac{\sigma^3\sigma_{Z_1}^2}{1-q}\left(z_q^2\,f_{Z_1}(z_q) + 2\overline{F}_{Z_1}(z_q)\,E\!\left(Z_1 \mid Z_1 > z_q\right)\right) = \frac{\sigma^3}{1-q}\left(z_q^2\,G^1\!\left(\tfrac{1}{2}z_q^2\right) - 2\int_{z_q}^\infty dG^2\!\left(\tfrac{1}{2}z^2\right)\right) = \sigma^3\left(z_q^2\,\delta_{1,q} + 2\delta_{2,q}\right), \tag{3.23}$$

where δ_{2,q} = G²(½z_q²)/(1 − q). Finally, substituting (3.20), (3.21) and (3.23) into (1.4) gives the general formula of TCS for the elliptical distributions.

Corollary 4. The TCK of X is given by the following formula:

$$TCK_q(X) = \frac{\sum_{i=0}^{4}(-1)^{4-i}\binom{4}{i}\delta_{1,q}^{4-i}\,\eta_{i,q}}{\left(z_q\delta_{1,q} + \sigma_{Z_1}^2\,r_{1,q}(z_q) - \delta_{1,q}^2\right)^{2}} - 3. \tag{3.24}$$

Proof. Using (1.5), we first derive the fourth conditional central moment, taking into account the associated elliptical random variable Z₁:

$$\mu_{4,q} = E\left[\left(X - E(X)\right)^4 \mid X > x_q\right] = \frac{\sigma^4\sigma_{Z_1}^2}{1-q}\left(z_q^3\,f_{Z_1}(z_q) + 3\int_{z_q}^\infty z^2\,f_{Z_1}(z)\,dz\right) = \frac{\sigma^4}{1-q}\left(z_q^3\,G^1\!\left(\tfrac{1}{2}z_q^2\right) - 3\int_{z_q}^\infty z\,dG^2\!\left(\tfrac{1}{2}z^2\right)\right). \tag{3.25}$$

Now, integrating by parts, we simplify μ_{4,q} and write it in terms of the associated random variables Z₁ and Z₂:

$$\mu_{4,q} = \frac{\sigma^4}{1-q}\left(z_q^3\,G^1\!\left(\tfrac{1}{2}z_q^2\right) + 3z_q\,G^2\!\left(\tfrac{1}{2}z_q^2\right) + 3\int_{z_q}^\infty G^2\!\left(\tfrac{1}{2}z^2\right)dz\right) = \sigma^4\left(z_q^3\,\delta_{1,q} + 3z_q\,\delta_{2,q} + 3\sigma_{Z_2}^2\,r_{2,q}(z_q)\right),$$

where r_{2,q}(z_q) = F̄_{Z₂}(z_q)/(1 − q). Substituting (3.20), (3.21), (3.23) and (3.25) into (1.5), the general formula of TCK for the elliptical distributions follows.

Corollary 5. The nth moments of X take the following forms.

For E(X^n):

$$E(X^n) = \lim_{q\to 0}E\left(X^n \mid X > x_q\right) = \mu^n + \sum_{k=2}^{n}\binom{n}{k}\mu^{n-k}\sigma^k\,(k-1)\,\sigma_{Z_1}^2\,E\!\left(Z_1^{k-2}\right). \tag{3.26}$$

For e_n(X) := E((X − E(X))^n):

$$e_n(X) = \lim_{q\to 0}TCM_q(X^n) = \sigma^n\,\sigma_{Z_1}^2\,(n-1)\,E\!\left(Z_1^{n-2}\right) = \sigma^n\,\sigma_{Z_1}^2\,(n-1)\,e_{n-2}(Z_1), \qquad n \ge 2. \tag{3.27}$$

Therefore, for an even moment, n = 2k, we conclude that

$$e_{2k}(X) = \sigma^{2k}\,\sigma_{Z_1}^2\,(2k-1)\,e_{2(k-1)}(Z_1). \tag{3.28}$$

Corollary 6. The kurtosis of X ~ E₁(μ, σ², g^{(1)}) takes the form

$$\gamma = 3\left(\frac{\sigma_{Z_2}^2}{\sigma_{Z_1}^4} - 1\right).$$

Proof. The kurtosis of X can be found immediately by using (3.28), together with σ²_{Z₁}e₂(Z₁) = σ²_{Z₂}, and substituting the fourth and second central moments into

$$\gamma := \frac{E(X-\mu)^4}{\left(E(X-\mu)^2\right)^2} - 3.$$

3.1 TCM for weighted-sum elliptical distributions
Consider an n×1 random vector X = (X₁, X₂, …, X_n)^T with an elliptical distribution X ~ E_n(μ, Σ, g^{(n)}). Then, using Landsman and Valdez (2003) [15], the distribution of the return R = x^T X, where x ∈ ℝⁿ, is as follows:

$$R = \mathbf{x}^T\mathbf{X} \sim E_1\!\left(\mu_R = \mathbf{x}^T\boldsymbol{\mu},\ \sigma_R^2 = \mathbf{x}^T\Sigma\mathbf{x},\ g^{(1)}\right). \tag{3.29}$$

Now, using (3.29) together with (3.20), (3.21), (3.22) and (3.24), we obtain the TCE, TV, TCS and TCK, respectively, evaluated at the standardized quantile z_{q,R} = (VaR_q(R) − x^Tμ)/√(x^TΣx):

$$TCE_q(R) = \mathbf{x}^T\boldsymbol{\mu} + \sqrt{\mathbf{x}^T\Sigma\mathbf{x}}\;\delta_{1,q},$$

$$TV_q(R) = \mathbf{x}^T\Sigma\mathbf{x}\left(z_{q,R}\,\delta_{1,q} + \sigma_{Z_1}^2\,r_{1,q}(z_{q,R}) - \delta_{1,q}^2\right),$$

$$TCS_q(R) = \frac{\sum_{i=0}^{3}(-1)^{3-i}\binom{3}{i}\delta_{1,q}^{3-i}\,\eta_{i,q}}{\left(z_{q,R}\,\delta_{1,q} + \sigma_{Z_1}^2\,r_{1,q}(z_{q,R}) - \delta_{1,q}^2\right)^{3/2}},$$

and

$$TCK_q(R) = \frac{\sum_{i=0}^{4}(-1)^{4-i}\binom{4}{i}\delta_{1,q}^{4-i}\,\eta_{i,q}}{\left(z_{q,R}\,\delta_{1,q} + \sigma_{Z_1}^2\,r_{1,q}(z_{q,R}) - \delta_{1,q}^2\right)^{2}} - 3,$$

where δ_{1,q}, r_{1,q} and η_{i,q} are computed at z_{q,R}.
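For a multivariate normal portfolio the reduction (3.29) can be checked directly: R = x^T X is N(x^Tμ, x^TΣx), and its TCE follows from the univariate normal case (Section 3.2.1). The following check is our own sketch, with arbitrary example numbers:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Portfolio data (arbitrary example values)
mu = np.array([0.02, 0.05])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
x = np.array([0.3, 0.7])
q = 0.95

# (3.29): R = x^T X ~ E1(x^T mu, x^T Sigma x, g^(1)); here the normal case
mu_R = x @ mu
sig_R = np.sqrt(x @ Sigma @ x)
z_q = norm.ppf(q)

# TCE_q(R) = x^T mu + sqrt(x^T Sigma x) * phi(z_q)/(1-q)   [(3.20), normal case]
tce_closed = mu_R + sig_R * norm.pdf(z_q) / (1 - q)

# Direct numerical evaluation of E(R | R > VaR_q(R))
var_R = mu_R + sig_R * z_q
num = quad(lambda r: r * norm.pdf(r, mu_R, sig_R), var_R, np.inf)[0]
tce_numeric = num / (1 - q)
```

The same pattern applies to TV, TCS and TCK, with the closed forms of Section 3.2.1 evaluated at z_{q,R}.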
3.2 Examples of risks associated with particular members of the elliptical family

We now discuss risk measures related to several distributions.

3.2.1 Normal distribution
Let X ~ N(μ, σ²); then, using [15], σ²_{Z₁} = 1. In this case we notice that, since the probability density function (pdf) is given by

$$f_X(x) = \frac{1}{\sigma}\,\phi\!\left(\frac{x-\mu}{\sigma}\right),$$

we have

$$G^j\!\left(\tfrac{1}{2}t^2\right) = \phi(t),\qquad j = 1,2,3,\ldots,$$

thus it is clear that σ²_{Z_j} = 1, j = 2,3,…, and therefore every random variable Z_j, j = 1,2,…, has a standard normal distribution, i.e.

$$Z_1 \stackrel{d}{=} Z_2 \stackrel{d}{=} \cdots \stackrel{d}{=} Z_n \sim N(0,1),$$

so we have density functions

$$f_{Z_j}(z_q) = \phi(z_q),\qquad j = 1,2,3,\ldots. \tag{3.30}$$

Now, using (3.20), the TCE of X has the form

$$TCE_q(X) = \mu + \sigma\,\frac{\phi(z_q)}{1-q},$$

which coincides with the TCE for the normal distribution given in Landsman and Valdez (2003) [15]. Further, substituting (3.30) into (3.21), (3.22) and (3.24), we get explicit formulae for the risk measures TV_q(X), TCS_q(X) and TCK_q(X). Writing λ_q := φ(z_q)/(1 − q), we have δ_{1,q} = δ_{2,q} = λ_q and r_{1,q}(z_q) = r_{2,q}(z_q) = 1, so that

$$TV_q(X) = \sigma^2\left(z_q\lambda_q + 1 - \lambda_q^2\right),$$

$$TCS_q(X) = \frac{\sum_{i=0}^{3}(-1)^{3-i}\binom{3}{i}\lambda_q^{3-i}\,\eta_{i,q}}{\left(z_q\lambda_q + 1 - \lambda_q^2\right)^{3/2}},$$

and

$$TCK_q(X) = \frac{\sum_{i=0}^{4}(-1)^{4-i}\binom{4}{i}\lambda_q^{4-i}\,\eta_{i,q}}{\left(z_q\lambda_q + 1 - \lambda_q^2\right)^{2}} - 3,$$

where

$$\eta_{0,q} = 1,\quad \eta_{1,q} = \lambda_q,\quad \eta_{2,q} = z_q\lambda_q + 1,\quad \eta_{3,q} = \left(z_q^2 + 2\right)\lambda_q,\quad \eta_{4,q} = z_q^3\lambda_q + 3\left(z_q\lambda_q + 1\right).$$

We note that the TV given here coincides with the one given in Furman and Landsman (2006) [8].
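The normal-tail formulae can be cross-checked against SciPy's truncated normal, whose mean, variance, skewness and excess kurtosis are exactly TCE, TV, TCS and TCK of Z | Z > z_q for the standard normal. The helper below is our own code; it evaluates the closed forms with λ_q = φ(z_q)/(1 − q):

```python
import numpy as np
from scipy.stats import norm, truncnorm

def normal_tail_measures(q):
    """Closed-form TCE, TV, TCS, TCK for the standard normal tail beyond z_q."""
    z = norm.ppf(q)
    lam = norm.pdf(z) / (1 - q)            # lambda_q = phi(z_q)/(1 - q)
    tce = lam                              # TCE_q(Z)
    tv = 1 + z * lam - lam**2              # TV_q(Z)
    # third and fourth central tail moments, obtained from the raw-moment
    # recursion m_k = z^{k-1} lam + (k-1) m_{k-2}
    m3 = 2 * lam**3 - 3 * z * lam**2 + (z**2 - 1) * lam
    m4 = (-3 * lam**4 + 6 * z * lam**3 - (4 * z**2 + 2) * lam**2
          + (z**3 + 3 * z) * lam + 3)
    return tce, tv, m3 / tv**1.5, m4 / tv**2 - 3

q = 0.95
tce, tv, tcs, tck = normal_tail_measures(q)
# scipy reference: Z | Z > z_q is truncnorm on (z_q, inf)
m, v, s, k = truncnorm.stats(norm.ppf(q), np.inf, moments='mvsk')
```

For a loss X = μ + σZ, TCE scales as μ + σ·tce, TV as σ²·tv, while TCS and TCK are scale-free.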
Figure 1: TCS measure of the normal distribution for different values of the quantile q, with σ = 1/4.

Figure 2: TCK measure of the normal distribution for different values of the quantile q, with σ = 1/4.
As we can see in Figure 1, the TCS measure increases with the value of the quantile q, while for q → 0 the TCS is exactly the skewness, which equals zero. Figure 2 provides an illustration of the TCK measure. Notice that this measure also grows with the value of the quantile q, while for q → 0 the TCK is exactly the (excess) kurtosis, which equals zero.

From (3.19) and from the fact that Z₁ =ᵈ Z₂ =ᵈ ⋯ =ᵈ Z_n ~ N(0,1), we get a recursion formula for the TCM of the normal distribution,

$$TCM_q(X^n) = \sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\left(\sigma\lambda_q\right)^{n-i}\sigma^i\,\eta_{i,q}, \tag{3.31}$$

and for TCM_q(Z^n) the recursion is

$$TCM_q(Z^n) = \sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\lambda_q^{n-i}\,\eta_{i,q},$$

where η_{0,q} = 1, η_{1,q} = λ_q and, for i ≥ 2, η_{i,q} = z_q^{i-1}λ_q + (i − 1)E(Z^{i-2} | Z > z_q). Letting q → 0, we obtain from (3.31) the unconditional even moment recursion (3.28) for X,

$$e_{2k}(X) = \sigma^{2k}(2k-1)\,e_{2(k-1)}(Z).$$
3.2.2 Generalized Student-t (GST) distribution

Bian and Tiku (1997) [1] and McDonald (1996) [23] suggest putting k_p = (2p−3)/2, where p is the power parameter, p > 3/2, for establishing the so-called GST univariate distribution with density

$$f_X(x) = \frac{1}{\sigma\sqrt{2k_p}\,B(1/2,\,p-1/2)}\left(1 + \frac{(x-\mu)^2}{2k_p\sigma^2}\right)^{-p},$$

where B(·,·) is the beta function. Using this parametrization we have σ²_{Z₁} = 1. In the case 1/2 < p ≤ 3/2 one takes k_p = 1/2, and the variance does not exist. Using [8] we have

$$f_{Z_1}(z) = \begin{cases} \sqrt{\dfrac{1}{2p-3}}\;t\!\left(\sqrt{\dfrac{1}{2p-3}}\,z;\ p-1\right), & \tfrac{3}{2} < p \le \tfrac{5}{2}, \\[2mm] \sqrt{\dfrac{2p-5}{2p-3}}\;t\!\left(\sqrt{\dfrac{2p-5}{2p-3}}\,z;\ p-1\right), & p > \tfrac{5}{2}, \end{cases}$$

where t(·; p) denotes a GST density with power parameter p. The cumulative generators of X are expressed as follows. Writing

$$c_p = \frac{1}{\sqrt{2k_p}\,B(1/2,\,p-1/2)},$$

we have

$$G^1(u) = c_p\int_u^\infty \left(1+\frac{t}{k_p}\right)^{-p}dt = \frac{c_p\,k_p}{p-1}\left(1+\frac{u}{k_p}\right)^{1-p},$$

and

$$G^2(u) = \frac{c_p\,k_p}{p-1}\int_u^\infty \left(1+\frac{t}{k_p}\right)^{1-p}dt = \frac{c_p\,k_p^2}{(p-1)(p-2)}\left(1+\frac{u}{k_p}\right)^{2-p}. \tag{3.32}$$

In general, we have

$$G^n(u) = \frac{c_p\,k_p^n}{(p-1)(p-2)\cdots(p-n)}\left(1+\frac{u}{k_p}\right)^{n-p}.$$

For p > 5/2, using (3.20), the TCE of X is

$$TCE_q(X) = \mu + \frac{\sigma}{1-q}\,\frac{c_p\,k_p}{p-1}\left(1+\frac{z_q^2}{2k_p}\right)^{1-p}. \tag{3.33}$$

Now consider p > 7/2. Then, by (3.21),

$$TV_q(X) = \frac{\sigma^2}{1-q}\left[z_q\,\frac{c_p\,k_p}{p-1}\left(1+\frac{z_q^2}{2k_p}\right)^{1-p} + \sigma_{Z_1}^2\,\overline{F}_{Z_1}(z_q) - \frac{1}{1-q}\left(\frac{c_p\,k_p}{p-1}\left(1+\frac{z_q^2}{2k_p}\right)^{1-p}\right)^{2}\right]. \tag{3.34}$$

For p > 9/2 we derive the TCS simply by substituting (3.33), (3.34), (3.32) and σ²_{Z₁} f_{Z₁}(z_q) = (c_p k_p/(p−1))(1 + z_q²/(2k_p))^{1−p} into (3.22). The TCK is derived, for p > 11/2, by substituting the above formulas into (3.24).

Using (3.28) we can calculate a closed formula for the even unconditional moments in the case of the GST. In fact, straightforwardly we get that

$$\sigma_{Z_i}^2 = \frac{k_p}{k_p - i},\qquad i = 1,2,\ldots,n.$$

From the fact that every density function f_{Z_i}(z) is a GST density with power parameter p − i and scale parameter k_p/(k_p − i), we conclude that

$$e_{2k}(X) = \sigma^{2k}(2k-1)!!\,\frac{k_p^k}{\prod_{i=1}^{k}(k_p - i)},\qquad k < k_p.$$

In the standardized case k_p = (2p−3)/2 this gives

$$e_{2k}(X) = \sigma^{2k}\left(p-\tfrac{3}{2}\right)^k\prod_{i=1}^{k}\frac{2i-1}{p-i-\tfrac{3}{2}}.$$

For the classical Student-t distribution, p = (ν+1)/2 and k_p = ν/2, where ν is the number of degrees of freedom, so that k_p − i = (ν − 2i)/2. Then

$$e_{2k}(X) = \sigma^{2k}(2k-1)!!\,\frac{(\nu/2)^k}{\prod_{i=1}^{k}\left((\nu-2i)/2\right)} = \sigma^{2k}\,\nu^k\,(2k-1)!!\prod_{i=1}^{k}\frac{1}{\nu-2i},$$

which conforms well with Kotz and Nadarajah (2004) [14], Sec. 1.7.
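The even-moment formula for the classical Student-t can be verified numerically against SciPy's implementation; the check below is our own, using ν = 12 and σ = 1:

```python
from math import prod
from scipy.stats import t

def student_even_moment(nu, k):
    """E[T^{2k}] = nu^k (2k-1)!! / prod_{i=1..k} (nu - 2i), valid for 2k < nu."""
    double_fact = prod(range(2 * k - 1, 0, -2))             # (2k-1)!!
    return nu**k * double_fact / prod(nu - 2 * i for i in range(1, k + 1))

nu = 12
m4 = student_even_moment(nu, 2)   # 12^2 * 3 / (10*8)     = 5.4
m6 = student_even_moment(nu, 3)   # 12^3 * 15 / (10*8*6)  = 54.0
```

Both values agree with `t(nu).moment(4)` and `t(nu).moment(6)`; for 2k ≥ ν the moments diverge, matching the condition k < k_p above.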
3.2.3 Logistic distribution

Suppose X has a logistic distribution, X ~ Lo(μ, σ²), with density generator

$$g^{(1)}(u) = \frac{c\,e^{-u}}{\left(1+e^{-u}\right)^2},\qquad u \ge 0,$$

where c is the normalizing constant. Then the pdf of X is as follows:

$$f_X(x) = \frac{c}{\sigma}\,\frac{e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}}{\left(1+e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\right)^2}.$$

It is clear that

$$G^1(u) = c\int_u^\infty \frac{e^{-t}}{\left(1+e^{-t}\right)^2}\,dt = \frac{c}{1+e^{u}}, \tag{3.35}$$

and further,

$$G^2(u) = c\int_u^\infty \frac{dt}{1+e^{t}} = c\left[t - \ln\left(e^{t}+1\right)\right]_{\infty}^{u} = c\left[\ln\left(e^{u}+1\right) - u\right], \tag{3.36}$$

and

$$G^3(u) = c\int_u^\infty \left[\ln\left(e^{t}+1\right) - t\right]dt = c\left[\mathrm{Li}_2\!\left(-e^{u}\right) + \frac{u^2}{2} + \frac{\pi^2}{6}\right], \tag{3.37}$$

where Li₂ denotes the dilogarithm. Using (3.35) and (3.20), the TCE of X has the form

$$TCE_q(X) = \mu + \frac{\sigma c}{1-q}\,\frac{1}{1+e^{\frac{1}{2}z_q^2}}. \tag{3.38}$$

Now, using (3.35), (3.36) and (3.21), the TV takes the explicit form

$$TV_q(X) = \frac{\sigma^2}{1-q}\left[\frac{c\,z_q}{1+e^{\frac{1}{2}z_q^2}} + \sigma_{Z_1}^2\,\overline{F}_{Z_1}(z_q) - \frac{1}{1-q}\left(\frac{c}{1+e^{\frac{1}{2}z_q^2}}\right)^2\right]. \tag{3.39}$$

Notice that the measures TCS and TCK can be simply obtained by substituting (3.38), (3.39), σ²_{Z₁} f_{Z₁}(z_q) = c/(1 + e^{½z_q²}) and G²(½z_q²) = c[ln(e^{½z_q²} + 1) − ½z_q²] into (3.22) and (3.24), respectively.

3.2.4 Laplace distribution
In the case that X has a Laplace distribution, X ~ La(μ, σ²), the pdf of X takes the form

$$f_X(x) = \frac{1}{2\sigma}\,e^{-\frac{|x-\mu|}{\sigma}}.$$

We obtain the cumulative generators G^j(u), j = 1,2,…,n, through its density generator g^{(1)}(u) = ½ exp{−√(2u)}, u ≥ 0. Substituting s = √(2t),

$$G^1(u) = \frac{1}{2}\int_u^\infty e^{-\sqrt{2t}}\,dt = \frac{1}{2}\int_{\sqrt{2u}}^\infty s\,e^{-s}\,ds = \frac{1}{2}\,e^{-\sqrt{2u}}\left(1+\sqrt{2u}\right),$$

and

$$G^2(u) = \int_u^\infty G^1(t)\,dt = \frac{1}{2}\int_{\sqrt{2u}}^\infty \left(s+s^2\right)e^{-s}\,ds = \frac{1}{2}\,e^{-\sqrt{2u}}\left(2u + 3\sqrt{2u} + 3\right).$$

In particular, G¹(½z_q²) = ½e^{−|z_q|}(1 + |z_q|) and G²(½z_q²) = ½e^{−|z_q|}(z_q² + 3|z_q| + 3). Moreover, for Z = (X − μ)/σ and z_q ≥ 0 (i.e. q ≥ 1/2), the exponential form of the tail gives

$$E\left(Z^n \mid Z > z_q\right) = \frac{e^{-z_q}}{2(1-q)}\sum_{j=0}^{n}\frac{n!}{j!}\,z_q^{j} = \frac{\Gamma\!\left(n+1,\,z_q\right)}{2(1-q)},$$

where Γ(·,·) is the upper incomplete gamma function. Using (3.20), the TCE of X is

$$TCE_q(X) = \mu + \frac{\sigma}{1-q}\,\frac{e^{-|z_q|}\left(1+|z_q|\right)}{2},$$

and, using (3.21), we obtain

$$TV_q(X) = \frac{\sigma^2}{1-q}\left[z_q\,\frac{e^{-|z_q|}\left(1+|z_q|\right)}{2} + \sigma_{Z_1}^2\,\overline{F}_{Z_1}(z_q) - \frac{1}{1-q}\left(\frac{e^{-|z_q|}\left(1+|z_q|\right)}{2}\right)^{2}\right],$$

where, for z_q ≥ 0, σ²_{Z₁} F̄_{Z₁}(z_q) = ½e^{−z_q}(2 + z_q). In particular, for q ≥ 1/2 (so that 1 − q = ½e^{−z_q}), these expressions simplify to TCE_q(X) = μ + σ(1 + z_q) and TV_q(X) = σ². The measures TCS and TCK follow immediately by substituting the above formulas into the formulas of TCS (3.22) and TCK (3.24), respectively.
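Because the Laplace upper tail is exponential, the tail measures simplify sharply for q ≥ 1/2: TCE_q(X) = μ + σ(1 + z_q) and TV_q(X) = σ². A direct numerical check, written by us for illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import laplace

mu, sigma, q = 0.0, 1.0, 0.9
x_q = laplace.ppf(q, mu, sigma)      # VaR_q; here x_q = -ln(2(1-q)) = ln 5
pdf = lambda x: laplace.pdf(x, mu, sigma)

# Tail probability and conditional mean/variance by direct integration
tail_prob = quad(pdf, x_q, np.inf)[0]                           # = 1 - q
tce = quad(lambda x: x * pdf(x), x_q, np.inf)[0] / tail_prob
tv = quad(lambda x: (x - tce)**2 * pdf(x), x_q, np.inf)[0] / tail_prob

# Closed forms for q >= 1/2 (shifted-exponential tail)
z_q = (x_q - mu) / sigma
```

The conditional variance being exactly σ², independent of q ≥ 1/2, is a direct consequence of the memoryless exponential tail.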
4 Tail conditional moments for the class of log-elliptical distributions

The family of log-elliptical distributions is a family of distributions with a heavy right tail. Naturally, this family is important when dealing with risks, since it is well known that risks commonly have non-symmetric distributions (see Landsman et al. (2013) [20] and Yang et al. (2015) [26]). The most famous and widely used member of this family is the log-normal distribution (see, for instance, Moy et al. (2015) [24]). This family is well studied in the actuarial literature; for example, the TCE of the log-elliptical distributions was given in Dhaene et al. [5], and the TV measure was derived in Landsman et al. (2013) [20]. Furthermore, in this section we consider the conditional moments E(Y^n | Y > y_q) instead of the TCM, since these moments take a simpler form for this family.

The class of log-elliptical distributions is defined by considering a transformation of the random vector X ~ E_k(μ, Σ, g^{(k)}), namely Y = (Y₁, Y₂, …, Y_k)^T = (e^{X₁}, e^{X₂}, …, e^{X_k})^T. Then we say that Y has a log-elliptical distribution with pdf

$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{1}{\sqrt{|\Sigma|}\,\prod_{i=1}^{k}y_i}\;g^{(k)}\!\left(\tfrac{1}{2}\left(\ln\mathbf{y}-\boldsymbol{\mu}\right)^T\Sigma^{-1}\left(\ln\mathbf{y}-\boldsymbol{\mu}\right)\right).$$

In this section we derive the nth TCM for the log-elliptical family of distributions. Let Y ~ LE₁(μ, σ², g^{(1)}), and let

$$Z = \frac{\ln Y - \mu}{\sigma} \sim E_1\!\left(0,1,g^{(1)}\right), \qquad z_q = \frac{\ln y_q - \mu}{\sigma}.$$
Lemma 2. Under conditions (3.16) and (3.17), the nth conditional moment of Y is given by

$$E\left(Y^n \mid Y > y_q\right) = e^{n\mu}\,\psi\!\left(-\tfrac{1}{2}n^2\sigma^2\right)\frac{\overline{G}^*(z_q)}{1-q}, \tag{4.41}$$

where ψ is the characteristic generator of Z, ψ(−½n²σ²) is finite, and Ḡ* is the right tail cdf of a random variable Z* ∈ ℝ with pdf

$$f^*(t) = \frac{e^{n\sigma t}\,g^{(1)}\!\left(\tfrac{1}{2}t^2\right)}{\psi\!\left(-\tfrac{1}{2}n^2\sigma^2\right)}.$$
Proof. The conditional moment (3.15) of the log-elliptical distribution is

$$E\left(Y^n \mid Y > y_q\right) = \frac{\int_{y_q}^\infty y^n\,\frac{1}{y\sigma}\,g^{(1)}\!\left(\tfrac{1}{2}\left(\tfrac{\ln y-\mu}{\sigma}\right)^2\right)dy}{\overline{F}_Y(y_q)}.$$

After substituting ξ = (ln y − μ)/σ, we get

$$E\left(Y^n \mid Y > y_q\right) = \frac{\int_{z_q}^\infty e^{n(\mu+\sigma\xi)}\,g^{(1)}\!\left(\tfrac{1}{2}\xi^2\right)d\xi}{\overline{F}_Z(z_q)} = e^{n\mu}\,\frac{\int_{z_q}^\infty e^{n\sigma\xi}\,g^{(1)}\!\left(\tfrac{1}{2}\xi^2\right)d\xi}{1-q}.$$

Once we note that, by analytic continuation of the characteristic function (whenever the integral is finite),

$$\int_{-\infty}^\infty e^{n\sigma\xi}\,g^{(1)}\!\left(\tfrac{1}{2}\xi^2\right)d\xi = \psi\!\left(-\tfrac{1}{2}(n\sigma)^2\right),$$

so that

$$\int_{z_q}^\infty e^{n\sigma\xi}\,g^{(1)}\!\left(\tfrac{1}{2}\xi^2\right)d\xi = \psi\!\left(-\tfrac{1}{2}(n\sigma)^2\right)\overline{G}^*(z_q),$$

the nth conditional moment is therefore expressed by (4.41).
Remark 1. In the case n = 1, the conditional moment (4.41) reduces to the TCE of the log-elliptical distributions, with the form

$$TCE_q(Y) = E\left(Y \mid Y > y_q\right) = e^{\mu}\,\psi\!\left(-\tfrac{1}{2}\sigma^2\right)\frac{\overline{G}^*(z_q)}{1-q}, \qquad f^*(t) = \frac{e^{\sigma t}\,g^{(1)}\!\left(\tfrac{1}{2}t^2\right)}{\psi\!\left(-\tfrac{1}{2}\sigma^2\right)}. \tag{4.42}$$

This case was introduced in Dhaene et al. (2008) [5], and further discussed in Landsman, Pat and Dhaene (2013) [20].
Let us now provide the explicit form of the nth TCM of the log-elliptical distribution.

Theorem 2. Under conditions (3.16) and (3.17), the nth TCM of Y is given by

$$TCM_q(Y^n) = e^{n\mu}\sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\,\frac{\psi\!\left(-\tfrac{1}{2}\sigma^2\right)^{n-i}\overline{G}^*_1(z_q)^{\,n-i}\;\psi\!\left(-\tfrac{1}{2}i^2\sigma^2\right)\overline{G}^*_i(z_q)}{(1-q)^{\,n-i+1}}. \tag{4.43}$$

Here Ḡ*_i is the right tail cdf of a random variable Z*_i ∈ ℝ with pdf

$$f^*_i(t) = \frac{e^{i\sigma t}\,g^{(1)}\!\left(\tfrac{1}{2}t^2\right)}{\psi\!\left(-\tfrac{1}{2}i^2\sigma^2\right)}.$$
Proof. Using the binomial theorem we have, straightforwardly,

$$TCM_q(Y^n) = E\left(\left[Y - TCE_q(Y)\right]^n \mid Y > y_q\right) = \sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\,TCE_q(Y)^{n-i}\,E\!\left(Y^i \mid Y > y_q\right). \tag{4.44}$$

Using (4.41) and (4.42), and after some algebraic calculations, we get (4.43).
Remark 2. The previous theorem allows us to derive, explicitly, the TCS and TCK measures, which are, respectively,

$$TCS_q(Y) = \frac{TCM_q(Y^3)}{TCM_q(Y^2)^{3/2}}, \tag{4.45}$$

and

$$TCK_q(Y) = \frac{TCM_q(Y^4)}{TCM_q(Y^2)^{2}} - 3. \tag{4.46}$$
Example 1 (Log-normal distribution). Suppose Y has a log-normal distribution, Y ~ LN₁(μ, σ²); then its pdf takes the following form:

$$f_Y(y) = \frac{1}{y\sigma}\,\phi\!\left(\frac{\ln y - \mu}{\sigma}\right). \tag{4.47}$$

Before we provide the pdf f*(t), we need to evaluate the characteristic generator ψ(u) = e^{−u}, u ≥ 0, of Z ~ N₁(0,1) at the point −½n²σ². Clearly,

$$\psi\!\left(-\tfrac{1}{2}n^2\sigma^2\right) = e^{\frac{1}{2}(n\sigma)^2}, \tag{4.48}$$

and the pdf of Z* is given by

$$f^*(t) = e^{n\sigma t - \frac{1}{2}(n\sigma)^2}\,\frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}t^2} = \phi\!\left(t - n\sigma\right), \tag{4.49}$$

and Ḡ*(z_q) then takes the form

$$\overline{G}^*(z_q) = \int_{z_q}^\infty \phi\!\left(z - n\sigma\right)dz = \overline{\Phi}\!\left(z_q - n\sigma\right).$$

Then, by (4.41), the nth conditional moment takes the form

$$E\left(Y^n \mid Y > y_q\right) = e^{n\mu + \frac{1}{2}n^2\sigma^2}\,\frac{\overline{\Phi}\!\left(z_q - n\sigma\right)}{1-q}. \tag{4.50}$$

In the case n = 1 we have

$$E\left(Y \mid Y > y_q\right) = e^{\mu + \frac{1}{2}\sigma^2}\,\frac{\overline{\Phi}\!\left(z_q - \sigma\right)}{1-q}, \tag{4.51}$$

which is the TCE for the log-normal distribution. Notice that (4.51) was given in Dhaene et al. (2008) [5]. The nth TCM can be expressed explicitly, using (4.43) and (4.50).
Example 2 (Log-Laplace distribution). Suppose Y has a log-Laplace distribution, Y ~ LLa₁(μ, σ²). Then its characteristic generator is given by (see, for instance, Landsman, Pat and Dhaene (2013) [20])

$$\psi(u) = \frac{1}{1+u}.$$

Furthermore, for ½n²σ² < 1,

$$\psi\!\left(-\tfrac{1}{2}n^2\sigma^2\right) = \frac{1}{1-\tfrac{1}{2}n^2\sigma^2}. \tag{4.52}$$

The pdf of the Laplace random variable Z ~ La₁(0,1) is

$$f_Z(t) = \frac{1}{\sqrt{2}}\,e^{-\sqrt{2}\,|t|},\qquad t \in \mathbb{R}. \tag{4.53}$$

In this case the quantile z_q is given by

$$z_q = \begin{cases} \dfrac{1}{\sqrt{2}}\,\ln(2q), & 0 < q < 1/2, \\[2mm] -\dfrac{1}{\sqrt{2}}\,\ln\!\left(2(1-q)\right), & 1/2 \le q < 1. \end{cases} \tag{4.54}$$

Thus, taking into account (4.52), (4.53) and (4.54), the nth conditional moment takes the following form:

$$E\left(Y^n \mid Y > y_q\right) = \frac{e^{n\mu}}{1-\tfrac{1}{2}n^2\sigma^2}\,\frac{\overline{G}^*(z_q)}{1-q}, \tag{4.55}$$

where Ḡ* is the right tail cdf of a random variable Z* ∈ ℝ with pdf

$$f^*(t) = \left(1-\tfrac{1}{2}n^2\sigma^2\right)\frac{1}{\sqrt{2}}\,e^{n\sigma t - \sqrt{2}\,|t|}.$$

Using (4.43) and (4.55), the nth TCM can be explicitly computed.
4.1 Conditional expectation of a log-elliptical portfolio

Since the TCE is a natural tool for examining the contribution of a certain risk to the total capital requirement (see, for instance, Landsman and Valdez (2003) [15] and Furman and Landsman (2005) [7]), we derive this contribution for the log-elliptical family of distributions by using Proposition 1 in [7].
Theorem 3. Let Y = (Y₁, Y₂, …, Y_k)^T be a multivariate portfolio where Y_j ~ LE₁(μ_j, σ_j², g_j^{(1)}), j = 1, 2, …, k, and Y_j is independent of Y_i for any i = 1, …, k, i ≠ j. Then

$$E\left(Y_j \mid S > s_q\right) = e^{\mu_j}\,\psi_j\!\left(-\tfrac{1}{2}\sigma_j^2\right)\frac{\overline{G}_{S-Y_j+Y_j^*}(s_q)}{1-q}.$$

Here S = Y₁ + Y₂ + ⋯ + Y_k, and Y_j* is a non-negative random variable with probability density function

$$f_{Y_j^*}(t) = \frac{t\,f_{Y_j}(t)}{E(Y_j)},\qquad t \ge 0,$$

ψ_j is the characteristic generator of the symmetric random variable (ln Y_j − μ_j)/σ_j, and Ḡ_{S−Y_j+Y_j*}(s_q) is the right tail cumulative function of S − Y_j + Y_j* at the point s_q = VaR_q(S).
Proof. First, let us derive the expectation of Y_j by considering (4.41) for q → 0:

$$E(Y_j) = \lim_{q\to 0}TCE_q(Y_j) = \lim_{q\to 0}e^{\mu_j}\,\psi_j\!\left(-\tfrac{1}{2}\sigma_j^2\right)\frac{\overline{G}^*\!\left(z_q\right)}{1-q} = e^{\mu_j}\,\psi_j\!\left(-\tfrac{1}{2}\sigma_j^2\right).$$

Now, by [7], for any non-negative random variable,

$$E\left(Y_j \mid S > s_q\right) = E(Y_j)\,\frac{\overline{G}_{S-Y_j+Y_j^*}(s_q)}{1-q}.$$

Therefore, the contribution of Y_j to the total capital requirement of S is expressed as follows:

$$E\left(Y_j \mid S > s_q\right) = e^{\mu_j}\,\psi_j\!\left(-\tfrac{1}{2}\sigma_j^2\right)\frac{\overline{G}_{S-Y_j+Y_j^*}(s_q)}{1-q}.$$
Example. Suppose Y = (Y₁, Y₂, …, Y_k)^T has a k-variate log-normal distribution, Y ~ LN_k(μ, Σ), where μ is a k×1 vector of means and Σ is a k×k scale matrix. Then, by using (4.48), it is clear that for any Y_j

$$E\left(Y_j \mid S > s_q\right) = e^{\mu_j + \frac{\sigma_j^2}{2}}\,\frac{\overline{G}_{S-Y_j+Y_j^*}(s_q)}{1-q}.$$
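Theorem 3 hinges on the size-biased variable Y_j* with density t f_{Y_j}(t)/E(Y_j). For the log-normal case this density is again log-normal, with the location shifted to μ_j + σ_j², which is why the factor e^{μ_j + σ_j²/2} = E(Y_j) appears in the example. A quick check of this size-biasing identity, written by us for illustration:

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.2, 0.6
Y = lognorm(s=sigma, scale=np.exp(mu))    # Y ~ LN(mu, sigma^2)
EY = np.exp(mu + 0.5 * sigma**2)          # E(Y) for the log-normal

# Size-biased density f*(t) = t f_Y(t) / E(Y) ...
f_star = lambda t: t * Y.pdf(t) / EY
# ... equals the LN(mu + sigma^2, sigma^2) density pointwise
Y_star = lognorm(s=sigma, scale=np.exp(mu + sigma**2))
```

Completing the square in the exponent of t f_Y(t) shows the identity analytically; the pointwise comparison above confirms it numerically.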
5 Conclusion

In this paper we have developed an explicit form of the tail conditional moments for the class of elliptical distributions,

$$TCM_q(X^n) = \sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\left(\sigma\delta_{1,q}\right)^{n-i}\mu_{i,q},$$

where, for n ≥ 2,

$$\mu_{n,q} = E\left(\left[X - E(X)\right]^n \mid X > x_q\right) = \sigma^n\left[z_q^{n-1}\delta_{1,q} + (n-1)\,\sigma_{Z_1}^2\,r_{1,q}(z_q)\,E\!\left(Z_1^{n-2} \mid Z_1 > z_q\right)\right].$$

Furthermore, we derived the tail conditional moments for the class of log-elliptical distributions,

$$E\left(Y^n \mid Y > y_q\right) = e^{n\mu}\,\psi\!\left(-\tfrac{1}{2}n^2\sigma^2\right)\frac{\overline{G}^*(z_q)}{1-q},$$

and provided the conditional expectation of the log-elliptical portfolio. The relevance of the class of elliptical distributions arises in cases where risks are assumed to follow symmetric distributions, while the relevance of the class of log-elliptical distributions arises in the other cases, in which the distribution is assumed to be positively skewed. The results of this paper generalize the results in Landsman and Valdez (2003) [15], Furman and Landsman (2006) [8] and Gupta et al. (2013) [9].

Acknowledgements. We thank the anonymous referees for their very useful comments.
References

[1] Bian, Guorui, and Moti L. Tiku. (1997). Bayesian inference based on robust priors and MML estimators: Part I, symmetric location-scale distributions. Statistics: A Journal of Theoretical and Applied Statistics, 29: 317-345.

[2] Cascos, I., and Molchanov, I. (2013). Multivariate risk measures: a constructive approach based on selections. arXiv preprint arXiv:1301.1496.
[3] Chen, B., Hsu, W. W., Ho, J. M., and Kao, M. Y. (2014). Linear-time accurate lattice algorithms for tail conditional expectation. Algorithmic Finance, 3: 1-2.

[4] Chernozhukov, V., and Umantsev, L. (2001). Conditional value-at-risk: Aspects of modeling and estimation. Empirical Economics, 26: 271-292.

[5] Dhaene, J., Henrard, L., Landsman, Z., Vandendorpe, A., Vanduffel, S. (2008). Some results on the CTE based capital allocation rule. Insurance: Mathematics and Economics, 42: 855-863.

[6] Di Bernardino, E., and Prieur, C. (2014). Estimation of multivariate conditional-tail-expectation using Kendall's process. Journal of Nonparametric Statistics, 26: 241-267.

[7] Furman, E., and Landsman, Z. (2005). Risk capital decomposition for a multivariate dependent gamma portfolio. Insurance: Mathematics and Economics, 37.3: 635-649.

[8] Furman, E., and Landsman, Z. (2006). Tail Variance Premium with Applications for Elliptical Portfolio of Risks. Astin Bulletin, 36: 433-462.

[9] Gupta, A. K., Varga, T., and Bodnar, T. (2013). Elliptically contoured models in statistics and portfolio theory. New York: Springer.

[10] Franke, J., Hardle, W. K., and Hafner, C. M. (2015). Value-at-Risk and Backtesting, in Statistics of Financial Markets. Springer Berlin Heidelberg, 359-372.

[11] Kaplanski, G., and Levy, H. (2015). Value-at-risk capital requirement regulation, risk taking and asset allocation: a mean-variance analysis. The European Journal of Finance, 21: 215-241.

[12] Kelker, D. (1970). Distribution theory of spherical distributions and location-scale parameter generalization. Sankhya, 32: 831-869.

[13] Kim, J. H. (2010). Conditional tail moments of the exponential family and its related distributions. North American Actuarial Journal, 14, 198-216.

[14] Kotz, S., and Nadarajah, S. (2004). Multivariate t-distributions and their applications. Cambridge University Press.
[15] Landsman, Z. M., and Valdez, E. A. (2003). Tail Conditional Expectations for Elliptical Distributions. North American Actuarial Journal, 7: 55-71.

[16] Landsman, Z. (2006). On the generalization of Stein's Lemma for elliptical class of distributions. Statistics & Probability Letters, 76, 1012-1016.

[17] Landsman, Z., and Neslehova, J. (2008). Stein's Lemma for elliptical random vectors. Journal of Multivariate Analysis, 99, 912-927.

[18] Landsman, Z. (2010). On the tail mean-variance optimal portfolio selection. Insurance: Mathematics and Economics, 46, 547-553.

[19] Landsman, Z., and Makov, U. (2011). Translation-invariant and positive-homogeneous risk measures and optimal portfolio management. The European Journal of Finance, 17, 307-320.

[20] Landsman, Z. M., Pat, N., Dhaene, J. (2013). Tail Variance Premiums for Log-Elliptical Distributions. Insurance: Mathematics & Economics, 52, 441-447.

[21] Linsmeier, T. J., and Pearson, N. D. (2000). Value at risk. Financial Analysts Journal, 56, 47-67.

[22] Liu, S., and Tse, Y. K. (2015). Intraday value-at-risk: An asymmetric autoregressive conditional duration approach. Journal of Econometrics.

[23] McDonald, James B. (1996). Probability distributions for financial models. Handbook of Statistics, 14, 427-461.

[24] Moy, R. L., Chen, L. S., and Kao, L. J. (2015). Normal and Lognormal Distributions. In Study Guide for Statistics for Business and Financial Economics, 83-97.

[25] Shaw, R. A., Smith, A. D., and Spivak, G. S. (2011). Measurement and modelling of dependencies in economic capital. British Actuarial Journal, 16, 601.

[26] Yang, Y., Ignataviciute, E., and Siaulys, J. (2015). Conditional tail expectation of randomly weighted sums with heavy-tailed distributions. Statistics & Probability Letters, 105, 20-28.