International Review of Financial Analysis 22 (2012) 10–17
Econometric modeling and value-at-risk using the Pearson type-IV distribution☆
S. Stavroyiannis a,⁎, I. Makris a, V. Nikolaidis a, L. Zarangas b
a Department of Finance and Auditing, Technological Educational Institute of Kalamata, Antikalamos 241 00, Greece
b Department of Finance and Auditing, Technological Educational Institute of Epirus, Preveza 481 00, Greece
Article info
Article history: Received 30 July 2011; Received in revised form 15 January 2012; Accepted 22 February 2012; Available online 3 March 2012.
JEL classification: C01; C46; C5.
Keywords: Financial markets; Value-at-risk; GARCH model; Pearson type-IV distribution.
Abstract
The recent financial crisis of 2007–2009 has challenged the requirements of the Basel II agreement on capital adequacy, as well as the appropriateness of value-at-risk (VaR) measurement for properly "back-tested" and "stress-tested" models. This paper reconsiders the use of VaR as a measure of the potential risk of economic losses in financial markets. We incorporate a GARCH model where the innovation process follows the Pearson type-IV distribution, and the results are compared with the skewed Student-t distribution in the sense of Fernandez and Steel. As case studies we consider the major historical indices of daily returns: DJIA, NASDAQ Composite, FTSE100, CAC40, DAX, and S&P500. VaR and backtesting are performed with the success–failure ratio, the Kupiec LR test, the Christoffersen independence and conditional coverage tests, the expected shortfall with the ESF1 and ESF2 measures, and the dynamic quantile test of Engle and Manganelli. The main findings indicate that the Pearson type-IV distribution gives better results than the skewed Student-t distribution, especially at the high confidence levels, making it a very good candidate as an alternative distributional scheme. © 2012 Elsevier Inc. All rights reserved.
1. Introduction

Risk is at the core of the business of banking and finance and is the basic ingredient for generating profit in any market-related activity (Stambaugh, 1996). The enormous growth of trading observed during recent years, in both developed and, mainly, emerging markets, has led financial regulators and supervisory committees to seek well-justified techniques for quantifying risk, which can be used to evaluate the potential loss that financial institutions may suffer. This need was further reinforced by a number of financial crises that took place in the 1980s, 1990s and 2000s, such as the worldwide stock market collapse in 1987, the Mexican crisis in 1995, the Asian and Russian financial crises in 1997–1998, the Orange County default, the Barings Bank collapse, the dot-com bubble and the bankruptcy of Long Term Capital Management, with Lehman Brothers being the most notable case. Such financial uncertainty has increased the likelihood that financial institutions will suffer substantial losses as a result of their exposure to unpredictable market changes.
☆ An early version of this paper was presented at the Business & Economics Society International conference at Split, Croatia, 6–9 July 2011. The authors would like to thank the discussant Svatopluk Kapounek, the chairman of the session Pedro Godinho, David E. Allen, Demetri Kanatarelis, and the conference participants for many helpful comments and discussions.
⁎ Corresponding author. Tel.: +30 2721045158; fax: +30 2721045151.
E-mail addresses:
[email protected], [email protected] (S. Stavroyiannis).
doi:10.1016/j.irfa.2012.02.003
In an attempt to remedy some of the problems created since the implementation of the Basel I agreement, Basel II (Basle Committee of Banking Supervision, 1996, 2006) was introduced in the 1990s and was put into full implementation in 2007. Under this framework banks were given an incentive to transfer risky assets off their balance sheets. Regulatory arbitrage was further encouraged, since Basel I made it possible for banks to treat insured assets as zero-risk government securities, a feature that was fully exploited by banks and led to the huge growth of the credit default swap (CDS) market. A central feature of the modified Basel II Accord was to allow banks to develop and use their own internal risk management models, conditional on these models being tested under extreme circumstances and properly "back-tested" and "stress-tested". During the last decade several approaches to estimating the profit and loss distribution of portfolio returns have been developed, and a substantial literature of empirical applications has emerged, providing overall support for the use of value-at-risk (VaR) as an appropriate measure of market risk (Jorion, 2000). The concept of VaR (Morgan J.P./Reuters, 1996) is an accepted methodology for quantifying risk, and its adoption by bank regulators is part of the evolution of risk management, indicating the importance of adequately measuring risk, both in terms of probability and of financial consequence, as a response to the increased volatility of financial markets. In general VaR is defined as the worst loss for a given confidence level, and from the probabilistic point of view the critical issue in VaR is the precise prediction of the tail probability of an asset, since the extreme movements in the tail provide information on the data generating process. A number of these models have focused on the computation of VaR on the left tail of the
probability density function (pdf), corresponding to negative returns. This implies that portfolio managers or traders hold long trading positions, meaning that they have bought an asset and are concerned with the case that its price falls, resulting in losses. More recent approaches deal with modeling VaR for portfolios that include both long and short positions, where in the latter case a loss occurs if the asset price increases. A stylized fact that is well documented in the literature is that stock returns for mature and emerging stock markets behave as martingale processes with leptokurtic distributions (Fama, 1965; Mandelbrot, 1963) and conditionally heteroskedastic errors (Fielitz, 1971; Mandelbrot, 1967). According to De Grauwe (2009) the Basel Accords have failed to provide stability to the banking sector, since the risks linked with universal banks are tail risks associated with bubbles and crises. Fat tails are linked to the occurrence of bubbles and crises, implying that models based on normally distributed errors substantially underestimate the probability of large shocks. Although there is a variety of empirical models to account for the stylized facts, like GARCH (Bollerslev, 1986), IGARCH (Engle & Bollerslev, 1986), EGARCH (Nelson, 1991), TARCH (Glosten, Jagannathan, & Runkle, 1993), APARCH (Ding, Granger, & Engle, 1993), FIGARCH (Baillie, Bollerslev, & Mikkelsen, 1996), FIGARCHC (Chung, 1999), FIEGARCH (Bollerslev & Mikkelsen, 1996), FIAPARCH (Tse, 1998), FIAPARCHC (Chung, 1999), and HYGARCH (Davidson, 2004), there is little work regarding the pdf schemes. These include the standard normal distribution (Engle, 1982), which is symmetric and does not account for fat tails, the Student-t distribution (Bollerslev, 1987), which is fat-tailed but symmetric, and the generalized error distribution (GED), which is more flexible than the Student-t, accommodating both fat and thin tails, introduced by Subbotin (1923) and applied by Nelson (1991). However, taking into account that in the VaR framework both the long and short positions should be considered, Giot and Laurent (2003) have shown that models which rely on a symmetric density for the error term underperform, because the pdf of asset returns is non-symmetric. Therefore, in such a case the VaR for portfolio managers or traders who hold both long (left tail) and short (right tail) positions cannot be accurately measured, and the skewed Student-t distribution in the sense of Fernandez and Steel (1998) has been implemented (Lambert & Laurent, 2000). The aim of this paper is to reconsider value-at-risk where the returns and the volatility clustering are modeled via a typical GARCH(1,1) model and the innovation process follows the Pearson type-IV distribution. The model and the distribution are fitted to the data via maximization of the logarithm of the maximum likelihood estimator (mle). As case studies we consider the major historical indices: the Dow Jones Industrial Average (DJIA), the National Association of Securities Dealers Automated Quotations (NASDAQ) Composite, the FTSE100, the Cotation Assistée en Continu (CAC) 40, the Deutscher Aktien IndeX (DAX) and the Standard & Poor's (S&P) 500. In-sample VaR backtesting is performed by the success–failure ratio, the Kupiec likelihood-ratio (LR) test, the Christoffersen conditional coverage test, the expected shortfall, and the dynamic quantile test of Engle and Manganelli.¹ The remainder of the paper is organized as follows.
Section 2 reviews the Pearson type-IV distribution and addresses several computational issues. In Section 3 we present the financial markets data used and the econometric methodology followed. Section 4 reports on the VaR analysis and Section 5 provides the concluding remarks.
1 The numerical computations for the optimization, the success–failure ratio, and the Christoffersen independence and conditional coverage tests have been performed with source code written in the Matlab® computing language; the expected shortfall measures and the Engle–Manganelli DQ-test have been computed with source code in the OxMetrics® programming environment using the native routines. The corresponding tests with the skewed Student-t distribution have been performed with the OxMetrics® software (Doornik, 2009; Laurent, 2009; Laurent & Peters, 2002).
2. The Pearson type-IV distribution and computational issues

The Pearson system (Pearson, 1893, 1895, 1901, 1916) is a generalization of the differential equation whose solution is the Gaussian distribution, to the differential equation:

\frac{f'(x)}{f(x)} = -\frac{x + a}{b_0 + b_1 x + b_2 x^2}   (1)

with the solution:

f(x) = \left(b_0 + b_1 x + b_2 x^2\right)^{-1/(2 b_2)} \exp\!\left[\frac{2 (b_1 - 2 a b_2)}{\sqrt{4 b_0 b_2 - b_1^2}} \tan^{-1}\!\left(\frac{b_1 + 2 b_2 x}{\sqrt{4 b_0 b_2 - b_1^2}}\right)\right].   (2)
Depending on the values of the b_i coefficients and the discriminant that appears in the square roots of Eq. (2), the Pearson system provides most of the known distributions, such as the Gaussian (Pearson type-0), the Beta (Pearson type-I), the Gamma (Pearson type-III), the Student-t (Pearson type-VII), etc. If the discriminant of the denominator in Eq. (1) is negative, so that the square roots in Eq. (2) are real, rearranging the terms leads to the Pearson type-IV distribution in its modern form (Nagahara, 1999):

f(x) = k(m, \nu, a, \lambda) \left[1 + \left(\frac{x - \lambda}{a}\right)^2\right]^{-m} \exp\!\left[-\nu \tan^{-1}\!\left(\frac{x - \lambda}{a}\right)\right].   (3)
In Eq. (3) the parameters are described as follows: a > 0 is the scale parameter, λ is the location parameter, m > 1/2 controls the kurtosis, ν controls the asymmetry of the distribution, and k(m, ν, a, λ) is the normalization constant. The distribution is negatively skewed for ν > 0 and positively skewed for ν < 0, while for ν = 0 it reduces to the Student's t-distribution (Pearson type-VII) with 2m − 1 degrees of freedom. The Pearson system has been used for several decades to fit pdfs with fat tails (Magdalinos & Mitsopoulos, 2007) or other scientific multi spectra (Stavroyiannis, 2001, 2002). Using the method of moments, the moments of the distribution can be calculated without knowing k(m, ν, a, λ), and a fit to the Pearson type-IV distribution can be obtained by computing the first four moments of the data and substituting them into the following system of equations:

r = 2m - 1 = \frac{6(\beta_2 - \beta_1 - 1)}{2\beta_2 - 3\beta_1 - 6}   (4a)

a = \frac{\sqrt{\mu_2 \left[16(r-1) - \beta_1 (r-2)^2\right]}}{4}   (4b)

\nu = -\frac{r (r-2) \sqrt{\beta_1}}{\sqrt{16(r-1) - \beta_1 (r-2)^2}}   (4c)

\lambda = \langle x \rangle - \frac{(r-2) \sqrt{\beta_1} \sqrt{\mu_2}}{4}   (4d)
where the two moment ratios, β_1 = μ_3^2/μ_2^3 and β_2 = μ_4/μ_2^2, denote the (squared) skewness and the kurtosis, respectively. This system of equations has been applied to financial time series by Bhattacharyya, Chaudhary, and Yadav (2008), Brännäs and Nordman (2003), and Premaratne and Bera (2005). Since the variable domain of the Pearson type-IV distribution is (−∞, +∞), Chen and Nie (2008) proposed a lognormal sum approximation using a variant of the Pearson distribution to account for the (0, +∞) domain. Ashton and Tippett (2006) derived the Pearson type-IV distribution from a stochastic differential equation with standard Markov properties, and commented on the distributional properties of selected time series. Grigoletto and Lisi (2009, 2011) incorporated constant and dynamic conditional skewness and kurtosis into a GARCH-type structure with the Pearson type-IV distribution, and performed in- and out-of-sample VaR analysis with the Kupiec and Christoffersen tests.
Due to the mathematical difficulty, the computational complexity, and the fact that the information regarding this distribution is scattered in the literature, the system has not yet attracted much attention in the econometric literature. To keep with the ARCH tradition, it is important to express the density in terms of the mean and the variance, in order to obtain a distribution with zero mean and unit variance. According to Pearson (1895) and Nagahara (1999, 2004, 2007) the normalization constant is given by:

k(m, \nu, a, \lambda) = \left|\frac{\Gamma(m + i\nu/2)}{\Gamma(m)}\right|^2 \frac{\Gamma(m)}{\sqrt{\pi}\, a\, \Gamma(m - 1/2)} = \frac{2^{2m-2} \left|\Gamma(m + i\nu/2)\right|^2}{\pi\, a\, \Gamma(2m-1)}   (5)
where a complex Gamma function is introduced. Nagahara (1999) suggested that the square of the absolute value of the ratio of the Gamma functions can be calculated as:

\left|\frac{\Gamma(x + iy)}{\Gamma(x)}\right|^2 = \prod_{n=0}^{\infty} \left[1 + \frac{y^2}{(x+n)^2}\right]^{-1}.   (6)
On the other hand, Heinrich (2004) argued that Eq. (6) is in practice too computationally intensive to be used for large y, even when only moderate precision is required, and noted that:

\left|\frac{\Gamma(x + iy)}{\Gamma(x)}\right|^2 = \frac{1}{{}_2F_1(-iy,\, iy;\, x;\, 1)}   (7)
at the cost of using a complex Gauss hypergeometric function {}_2F_1(a, b; c; z), which is available in only a few software packages and whose evaluation is very slow. Using the relation Γ(z + 1) = zΓ(z), the series in Eq. (6) can be replaced by

\left|\frac{\Gamma(x + iy)}{\Gamma(x)}\right|^2 = \left(1 + \frac{y^2}{x^2}\right)^{-1} \left|\frac{\Gamma(x + 1 + iy)}{\Gamma(x + 1)}\right|^2 = \left(1 + \frac{y^2}{x^2}\right)^{-1} \left(1 + \frac{y^2}{(x+1)^2}\right)^{-1} \left|\frac{\Gamma(x + 2 + iy)}{\Gamma(x + 2)}\right|^2 = \dots   (8)

so that a workable strategy for small x is to calculate {}_2F_1(−iy, iy; x; 1) using the series in Eq. (8), starting from some shift n sufficiently large and working down to n = 0. An alternative closed-form expression for the type-IV distribution and the normalization constant was proposed recently by Willink (2008). Heinrich (2004) also demonstrated that the cumulative distribution function F(x), needed to calculate the constants at the confidence levels, can be expressed in terms of the hypergeometric function:

\frac{F(x)}{f(x)} = \frac{a}{2m-1}\left(i - \frac{x - \lambda}{a}\right) {}_2F_1\!\left(1,\; m + i\frac{\nu}{2};\; 2m;\; \frac{2}{1 - i\frac{x - \lambda}{a}}\right)   (9)
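To make Eqs. (5), (6) and (8) concrete, the following short Python sketch (the computations in the paper itself were carried out in Matlab® and OxMetrics®) evaluates the squared modulus of the Gamma-function ratio by truncating the product of Eq. (6) at a shifted argument and recursing back down as in Eq. (8), and then assembles the normalization constant of Eq. (5). The function names, the shift n_shift and the truncation length n_terms are illustrative assumptions rather than anything prescribed in the text.

```python
import numpy as np
from scipy.special import gammaln   # log-Gamma for real arguments

def gamma_ratio_sq(x, y, n_shift=50, n_terms=400):
    """|Gamma(x+iy)/Gamma(x)|^2 via Eqs. (6) and (8): evaluate the truncated
    product of Eq. (6) at the shifted argument x + n_shift, then recurse
    back down to x using the relation of Eq. (8)."""
    n = np.arange(n_terms)
    r = np.prod(1.0 / (1.0 + (y / (x + n_shift + n)) ** 2))   # Eq. (6), truncated
    for j in range(n_shift - 1, -1, -1):                      # Eq. (8), worked downwards
        r /= 1.0 + (y / (x + j)) ** 2
    return r

def pearson4_norm_const(m, nu, a):
    """Normalization constant k(m, nu, a, lambda) of Eq. (5); it does not depend on lambda."""
    log_front = gammaln(m) - 0.5 * np.log(np.pi) - np.log(a) - gammaln(m - 0.5)
    return np.exp(log_front) * gamma_ratio_sq(m, 0.5 * nu)
```

This shift-and-recurse strategy mirrors Heinrich's (2004) observation that the raw product of Eq. (6) converges too slowly for large y to be used directly.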
The Gauss hypergeometric function converges absolutely inside the unit circle, where x < λ − a√3; if x > λ + a√3 the symmetry identity F(x|m, ν, a, λ) ≡ 1 − F(−x|m, −ν, a, λ) can be applied, and in the case that |x − λ| ≤ a√3 a "linear transformation" as described in Abramowitz and Stegun (1965) can be used:

F(x \mid m, \nu, a, \lambda) = f(x \mid m, \nu, a, \lambda)\, \frac{i a}{2m - i\nu - 2} \left[1 + \left(\frac{x - \lambda}{a}\right)^2\right] \frac{{}_2F_1\!\left(1,\; 2 - 2m,\; 2 - m + i\frac{\nu}{2};\; \frac{1 + i(x - \lambda)/a}{2}\right)}{1 - \exp[-\pi(\nu + i 2m)]}   (10)

at the cost again of using complex hypergeometric functions in Eqs. (9) and (10).

3. Econometric methodology

3.1. The data

If the value of an asset has been recorded for a sufficient time, a common way to analyze the time evolution of the returns is through successive differences of the natural logarithm of the price, R(t + Δt) = ln(S(t + Δt)/S(t)) × 100. The merit of this approach is that the average correction of scale changes is incorporated without requiring deflators or discounting factors, but, in contrast to other choices, a nonlinear transformation is used (Stavroyiannis, Makris, & Nikolaidis, 2010). The time series under consideration are the major historical indices up to 31-Dec-2010, that is, the DJIA from 1-Oct-1928 (20,655 observations), the NASDAQ from 5-Feb-1971 (10,067 observations), the S&P500 from 3-Jan-1950 (15,384 observations), the FTSE100 from 2-Apr-1984 (6758 observations), the CAC from 1-Mar-1990 (5268 observations), and the DAX from 26-Nov-1990 (5079 observations), therefore examining several extended financial time series of varying length and including different historical crashes. Typical descriptive statistics of the data are shown in Table 1.

3.2. The model

We consider a GARCH(1,1) procedure where the innovation process follows the Pearson type-IV distribution:

r_t = \mu + \varepsilon_t = \mu + \sigma_t z_t, \qquad z_t \sim \mathrm{Pearson\ IV}(0, 1; m, \nu)   (11)

\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2.   (12)

The normalized Pearson type-IV distribution is realized as:

P(x) = \frac{\hat{\sigma}}{\sqrt{\pi}}\, \frac{\left|\Gamma\!\left(\frac{m+1}{2} + i\frac{\nu}{2}\right)\right|^2}{\Gamma\!\left(\frac{m}{2}\right)\Gamma\!\left(\frac{m+1}{2}\right)} \left(1 + x^2\right)^{-\frac{m+1}{2}} \exp\!\left[-\nu \tan^{-1} x\right]   (13)

where

x = \hat{\sigma} z + \hat{\mu}   (14)

\hat{\mu} = -\frac{\nu}{m - 1}   (15)

\hat{\sigma}^2 = \frac{1}{m - 2}\left[1 + \frac{\nu^2}{(m - 1)^2}\right].   (16)

Optimization is performed via maximization of the logarithm of the mle,

\mathrm{mle} = \sum_{t=1}^{N} \ln\left\{\frac{\hat{\sigma}}{\sigma_t \sqrt{\pi}}\, \frac{\left|\Gamma\!\left(\frac{m+1}{2} + i\frac{\nu}{2}\right)\right|^2}{\Gamma\!\left(\frac{m}{2}\right)\Gamma\!\left(\frac{m+1}{2}\right)} \left(1 + x_t^2\right)^{-\frac{m+1}{2}} \exp\!\left[-\nu \tan^{-1} x_t\right]\right\}.   (17)

In Eq. (13) the tail coefficient of Eq. (3) has been replaced with a Student-t like version, (m + 1)/2, for convergence issues of the complex Gamma function.²

2 The squared ratio of the Gamma functions in Eq. (13) was calculated by transcribing the C++ source code (Heinrich, 2004) to Matlab®, and the constants at the confidence levels were computed via numerical integration of the pdf using a Lobatto quadrature (Gander & Gautschi, 2000). The t-statistics in the Pearson type-IV case indicate the standard error, computing the Hessian of the Lagrangian for the unconstrained optimization at the optimal vector solution of the coefficients.
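As an illustration of the estimation step, the sketch below filters a return series through the GARCH(1,1) recursion of Eqs. (11)–(12) and evaluates the log-likelihood of Eq. (17) with the standardized Pearson type-IV innovation of Eqs. (13)–(16). It is a minimal Python re-expression, not the authors' Matlab®/OxMetrics® code; the parameter ordering, the variance initialization and the use of SciPy's complex log-Gamma are assumptions of this sketch.

```python
import numpy as np
from scipy.special import loggamma   # handles complex arguments

def pearson4_garch_negloglik(params, r):
    """Negative log-likelihood of the GARCH(1,1)-Pearson type-IV model of Eqs. (11)-(17).
    params = (mu, omega, alpha, beta, m, nu); r is the vector of percentage log returns."""
    mu, omega, alpha, beta, m, nu = params
    r = np.asarray(r, dtype=float)
    eps = r - mu                                      # Eq. (11): eps_t = sigma_t z_t
    sig2 = np.empty_like(r)
    sig2[0] = np.var(r)                               # assumed initialization
    for t in range(1, len(r)):                        # Eq. (12)
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
    sig = np.sqrt(sig2)

    mu_hat = -nu / (m - 1.0)                                        # Eq. (15)
    sig_hat = np.sqrt((1.0 + nu ** 2 / (m - 1.0) ** 2) / (m - 2.0)) # Eq. (16)
    x = sig_hat * (eps / sig) + mu_hat                              # Eq. (14), with z_t = eps_t / sigma_t

    # log of the constant in Eq. (13); |Gamma((m+1)/2 + i nu/2)|^2 via the complex log-Gamma
    log_c = (np.log(sig_hat) - 0.5 * np.log(np.pi)
             + 2.0 * np.real(loggamma(0.5 * (m + 1.0) + 0.5j * nu))
             - np.real(loggamma(0.5 * m)) - np.real(loggamma(0.5 * (m + 1.0))))
    ll = log_c - np.log(sig) - 0.5 * (m + 1.0) * np.log1p(x ** 2) - nu * np.arctan(x)
    return -np.sum(ll)                                # Eq. (17), negated for a minimizer
```

A full estimation would pass this function to a numerical optimizer (e.g. scipy.optimize.minimize) under positivity constraints and m > 2, in the spirit of the optimization and Hessian-based standard errors described in footnote 2.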
Table 1
Descriptive statistics.

              CAC        DAX        DJIA       FTSE       NASDAQ     S&P
Observations  5268       5079       20,655     6758       10,067     15,348
Mean          0.01387    0.03085    0.01877    0.02475    0.03256    0.02817
Variance      2.01342    2.12391    1.35308    1.26344    1.58616    0.94492
Skewness      −0.01174   −0.09024   −0.59368   −0.38692   −0.28004   −1.05797
Kurtosis      7.79679    7.95389    27.68531   11.79733   13.07590   32.13443
The results of the optimization, namely the constant in mean μ, the constant in variance ω, the ARCH term α, the GARCH term β, the persistence of the model (ARCH + GARCH terms), the tail coefficient m, the asymmetry coefficient ν, and the value of the mle, are shown in Table 2, with the associated t-statistics in parentheses. In the tables that follow, results that are better than those of the skewed Student-t distribution appear in bold, worse results appear in regular fonts, and equal results appear in bold italics. It is observed that the Pearson type-IV distribution improves the mle in most of the cases.

4. Value-at-risk and backtesting

Having estimated the unknown parameters of the model, the VaR for the a-percentile of the assumed distribution can be calculated straightforwardly using the equation VaR(a) = μ + F^{-1}(·)σ (Tang & Shieh, 2006), which under the Pearson type-IV distribution for the long and short positions is:

VaR_{long} = \mu + F^{-1}_{\mathrm{Pearson\ IV}}(1 - a)\,\sigma   (18)

VaR_{short} = \mu + F^{-1}_{\mathrm{Pearson\ IV}}(a)\,\sigma   (19)
where F^{-1}(·) denotes the inverse of the cumulative distribution function at the specific confidence level. A typical VaR for the historical S&P500 index for the long (lower envelope, red line) and short (upper envelope, green line) positions at the 95% confidence level is shown in Fig. 1. Each time an observation exceeds the VaR border it is called a VaR violation, VaR breach, or VaR break. Verifying the accuracy of risk models used in setting the market risk capital requirements demands backtesting (Diamandis, Drakos, Kouretas, & Zarangas, 2011; Drakos, Kouretas, & Zarangas, 2010; McMillan & Kambouroudis, 2009), and over the last decade a variety of tests have been proposed that can be used to investigate the fundamental properties of a proposed VaR model. The accuracy of these VaR estimates is of concern to both financial institutions and their regulators. As noted by Diebold and Lopez (1996), it is unlikely that forecasts from a model will exhibit all the properties of accurate forecasts. Thus, evaluating VaR estimates solely upon whether a specified property is present may yield only limited information regarding their accuracy (Huang & Lin, 2004). In this work we consider five accuracy measures: the success–failure ratio, the Kupiec LR test, the Christoffersen independence and conditional coverage tests, the expected shortfall with related measures, and the dynamic quantile test of Engle and Manganelli.
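The constants F^{-1}(·) entering Eqs. (18)–(19) can be obtained by numerically inverting the standardized Pearson type-IV cumulative distribution function, in the spirit of the quadrature approach mentioned in footnote 2. The sketch below is a hedged illustration: the bracketing interval, the tolerances and the example parameter values (taken to be close to the DJIA entries of Table 2) are assumptions, and adaptive quadrature is used in place of the Lobatto rule cited in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import loggamma   # complex log-Gamma

def pearson4_std_pdf(z, m, nu):
    """Standardized (zero-mean, unit-variance) Pearson type-IV density of Eqs. (13)-(16)."""
    mu_hat = -nu / (m - 1.0)                                        # Eq. (15)
    sig_hat = np.sqrt((1.0 + nu ** 2 / (m - 1.0) ** 2) / (m - 2.0)) # Eq. (16)
    x = sig_hat * z + mu_hat                                        # Eq. (14)
    log_c = (np.log(sig_hat) - 0.5 * np.log(np.pi)
             + 2.0 * np.real(loggamma(0.5 * (m + 1.0) + 0.5j * nu))
             - np.real(loggamma(0.5 * m)) - np.real(loggamma(0.5 * (m + 1.0))))
    return np.exp(log_c - 0.5 * (m + 1.0) * np.log1p(x ** 2) - nu * np.arctan(x))

def pearson4_std_ppf(p, m, nu, lo=-50.0, hi=50.0):
    """Quantile z_p with P(Z <= z_p) = p, by numerical inversion of the CDF."""
    cdf = lambda z: quad(pearson4_std_pdf, -np.inf, z, args=(m, nu))[0]
    return brentq(lambda z: cdf(z) - p, lo, hi, xtol=1e-8)

# Illustration with values close to the DJIA estimates of Table 2:
# z05 = pearson4_std_ppf(0.05, m=6.639, nu=0.6457)
# VaR_long_t = mu + z05 * sigma_t        # Eq. (18) with a = 0.95, so 1 - a = 0.05
```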
4.1. Success–failure ratio

A typical way to examine a VaR model is to count the number of VaR violations, i.e. the occasions when portfolio losses exceed the VaR estimates. An accurate VaR approach produces a number of VaR breaks as close as possible to the number specified by the confidence level. If the number of violations is larger than the selected confidence level would indicate, the model underestimates the risk; if it is smaller, the model overestimates the risk. The test is conducted as x/T, where T is the total number of observations and x is the number of violations for the specific confidence level. The results are shown in Table 3, indicating that the Pearson type-IV distribution outperforms in all cases.

Table 2
Pearson-IV GARCH model (optimization results).

       CAC               DAX               DJIA              FTSE 100          NASDAQ            S&P
μ      0.0421 (2.8117)   0.0610 (4.2225)   0.0388 (7.3959)   0.0449 (4.4690)   0.0553 (7.1786)   0.0446 (7.9927)
ω      0.0201 (4.2019)   0.0146 (3.9023)   0.0066 (7.5973)   0.0144 (5.2191)   0.0067 (5.3459)   0.0054 (6.1529)
α      0.0756 (9.5948)   0.0815 (9.5718)   0.0657 (16.974)   0.0849 (11.268)   0.0952 (13.717)   0.0707 (14.919)
β      0.9142 (104.88)   0.9126 (104.77)   0.9289 (235.56)   0.9021 (106.35)   0.9035 (136.95)   0.9246 (193.21)
m      11.5899 (7.3530)  9.1864 (8.9135)   6.6390 (22.713)   14.0160 (7.3086)  8.0972 (13.364)   7.0679 (18.325)
ν      1.6313 (3.1616)   1.3842 (3.7517)   0.6457 (6.3830)   2.7783 (3.8899)   2.4172 (7.8289)   0.6710 (5.2822)
α+β    0.9898            0.9941            0.9946            0.9870            0.9987            0.9954
mle    −8522.8           −8115.0           −26,245.4         −9105.7           −13,212.8         −17,930.0

4.2. Kupiec LR test

However, it is rarely the case that the exact number of violations suggested by the confidence level is observed; it therefore comes down to whether the number of violations is reasonable or not before a model is accepted or rejected. The most widely known test based on failure rates is the proportion of failures (POF) test of Kupiec (1995), which measures whether the number of violations is consistent with the confidence level; under the null hypothesis that the model is correct, the number of violations follows the binomial distribution. The Kupiec test (unconditional coverage) is best conducted as a likelihood-ratio (LR) test, where the test statistic takes the form

LR_{POF} = -2 \ln\left[\frac{p^x (1-p)^{T-x}}{\left(\frac{x}{T}\right)^x \left(1 - \frac{x}{T}\right)^{T-x}}\right] \sim \chi^2(1)

where T is the total number of observations, x is the number of violations, and p is the coverage rate implied by the specified confidence level. Under the null hypothesis that the model is correct, LR_{POF} is asymptotically χ² distributed with one degree of freedom. If the value of the LR_{POF} statistic exceeds the critical value of the χ² distribution, the null hypothesis is rejected and the model is considered to be inaccurate. Therefore, the risk model is rejected if it generates too many or too few violations; however, based on that assumption a model that generates dependent exceptions can also be accepted as accurate. The p-values of the Kupiec LR test are shown in Table 4, where the Pearson type-IV distribution appears to give better results for all cases.

4.3. Christoffersen independence test and conditional coverage

In order to check whether the exceptions are spread evenly over time or form clusters, the Christoffersen (1998) interval forecast test (conditional coverage) is used. Assigning an indicator that takes the value of 1 if VaR is exceeded and 0 otherwise,

I_t = \begin{cases} 1 & \text{if a violation occurs} \\ 0 & \text{if no violation occurs} \end{cases}   (24)
Fig. 1. VaR for the historical S&P index for the long and short position at the 95% confidence level.
and defining n_{ij} as the number of days on which condition j occurred given that condition i occurred on the previous day, the results can be displayed in a 2 × 2 contingency table. Letting π_i denote the probability of observing a violation conditional on state i on the previous day, with π_0 = n_{01}/(n_{00} + n_{01}), π_1 = n_{11}/(n_{10} + n_{11}), and π = (n_{01} + n_{11})/(n_{00} + n_{01} + n_{10} + n_{11}), independence of the violations under the null hypothesis states that π_0 = π_1. The relevant test statistic for independence of violations is a likelihood ratio,

LR_{ind} = -2 \ln\left[\frac{(1-\pi)^{n_{00}+n_{10}}\, \pi^{n_{01}+n_{11}}}{(1-\pi_0)^{n_{00}}\, \pi_0^{n_{01}}\, (1-\pi_1)^{n_{10}}\, \pi_1^{n_{11}}}\right] \sim \chi^2(1)   (25)

and is asymptotically χ² distributed with one degree of freedom. In the case where n_{11} = 0, indicating no violation clustering, either due to few observations or to rather high confidence levels, the test is conducted as (Christoffersen & Pelletier, 2004):

LR_{ind} = -2 \ln\left[\frac{(1-\pi)^{n_{00}+n_{10}}\, \pi^{n_{01}}}{(1-\pi_0)^{n_{00}}\, \pi_0^{n_{01}}}\right]

which, as a result, discards the NaNs (not a number) that appear in several works in the literature. The results of the Christoffersen independence test are shown in Table 5, where both distributional schemes appear to behave accurately. Joining the two criteria, the Kupiec test and the Christoffersen independence test, the Christoffersen conditional coverage (CC) test is obtained. The test statistic for conditional coverage is asymptotically χ² distributed with two degrees of freedom:

LR_{CC} = LR_{POF} + LR_{ind} \sim \chi^2(2)   (26)

and the results are shown in Table 6. In this case the Pearson type-IV distribution gives better results due to its superiority in the Kupiec part of the test, especially at the high confidence levels, which are mostly of interest.

4.4. Expected shortfall and tail measures

Since the above tests are not a coherent measure of risk in the sense of Artzner, Delbaen, Eber, and Heath (1997, 1999), in situations where the return distribution exhibits fat tails expected shortfall (ES) becomes a good alternative to VaR (Scaillet, 2000). Expected shortfall is a coherent measure of risk and is defined as the expected value of the losses conditional on the loss being larger than the VaR. The expected shortfall associated with a confidence level 1 − p, denoted μ_p, is the conditional expectation of a loss given that the loss is larger than ν_p, that is:

\mu_p = E(Y_t \mid Y_t > \nu_p).   (27)

The average multiple of tail events to the risk measure captures the degree to which events in the tail of the distribution typically exceed the VaR measure, by calculating the average multiple of these outcomes to their corresponding VaR measures (Hendricks, 1996); these measures are labeled ESF1 (Table 7) and ESF2 (Table 8), respectively. The Pearson type-IV distribution appears to behave better for the short position for the ESF1 measure, while the performance appears to be inconclusive for the ESF2 measure; however, according to Giot and Laurent (2003), "It should be stressed that the expected shortfall is not a tool to rank VaR models or assess models' performances. Nevertheless it is useful for risk managers as it answers the following question: 'when my model fails, how much do I lose on average?'"
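A compact sketch of the backtesting measures of Sections 4.1–4.4 is given below: it computes the failure ratio, the Kupiec LR_POF statistic, the Christoffersen LR_ind and LR_CC statistics, and the average loss beyond VaR in the sense of Eq. (27). The treatment of empty transition cells (the NaN issue noted above) and the exact quantity labeled ESF1 follow the simple conventions coded here, which are assumptions of this illustration rather than the internal definitions of any particular package.

```python
import numpy as np
from scipy.stats import chi2

def var_backtest(returns, var, p):
    """In-sample backtest of a long-position VaR series at coverage level p
    (e.g. p = 0.01 for the 99% level); returns and var are NumPy arrays."""
    viol = (returns < var).astype(int)              # VaR violations (long position)
    T, x = len(viol), int(viol.sum())
    phat = x / T                                    # success-failure ratio (Section 4.1)

    # Kupiec proportion-of-failures LR test (Section 4.2)
    if 0 < x < T:
        lr_pof = -2.0 * (x * np.log(p) + (T - x) * np.log(1.0 - p)
                         - x * np.log(phat) - (T - x) * np.log(1.0 - phat))
    else:
        lr_pof = np.nan

    # Christoffersen independence test from the 2x2 transition counts (Section 4.3)
    n00 = int(np.sum((viol[1:] == 0) & (viol[:-1] == 0)))
    n01 = int(np.sum((viol[1:] == 1) & (viol[:-1] == 0)))
    n10 = int(np.sum((viol[1:] == 0) & (viol[:-1] == 1)))
    n11 = int(np.sum((viol[1:] == 1) & (viol[:-1] == 1)))
    pi0 = n01 / max(n00 + n01, 1)
    pi1 = n11 / max(n10 + n11, 1)
    pi = (n01 + n11) / max(n00 + n01 + n10 + n11, 1)

    def ll(q, k, n):
        # k hits and n non-hits at rate q; returning 0 at the boundary is the correct limit
        return k * np.log(q) + n * np.log(1.0 - q) if 0.0 < q < 1.0 else 0.0

    lr_ind = -2.0 * (ll(pi, n01 + n11, n00 + n10) - ll(pi0, n01, n00) - ll(pi1, n11, n10))
    lr_cc = lr_pof + lr_ind                         # Eq. (26)

    esf1 = returns[viol == 1].mean() if x > 0 else np.nan   # average loss beyond VaR, Eq. (27)

    return {"failure_ratio": phat,
            "kupiec_p": 1.0 - chi2.cdf(lr_pof, 1),
            "independence_p": 1.0 - chi2.cdf(lr_ind, 1),
            "cc_p": 1.0 - chi2.cdf(lr_cc, 2),
            "ESF1": esf1}
```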
Table 3
Success/failure ratio.

                 CAC       DAX       DJIA      FTSE 100  NASDAQ    S&P 500
Short position
0.95             0.95406   0.94881   0.95062   0.95265   0.95093   0.95068
0.975            0.97874   0.97972   0.97700   0.97899   0.97586   0.97570
0.99             0.99146   0.99252   0.99182   0.99260   0.99017   0.99140
0.995            0.99487   0.99567   0.99579   0.99586   0.99533   0.99629
0.9975           0.99791   0.99783   0.99797   0.99689   0.99752   0.99837
0.999            0.99924   0.99921   0.99947   0.99882   0.99940   0.99935
Long position
0.05             0.05429   0.05513   0.05316   0.05164   0.05404   0.05356
0.025            0.02601   0.02441   0.02687   0.02471   0.02344   0.02750
0.01             0.00873   0.00925   0.00992   0.00977   0.00944   0.00899
0.005            0.00342   0.00394   0.00552   0.00533   0.00427   0.00424
0.0025           0.00247   0.00236   0.00266   0.00222   0.00219   0.00241
0.001            0.00114   0.00138   0.00116   0.00104   0.00119   0.00111

Table 4
VaR results (in sample): Kupiec test.

                 CAC       DAX       DJIA      FTSE      NASDAQ    S&P
Short position
0.95             0.17043   0.69797   0.68338   0.31364   0.66804   0.69949
0.975            0.07449   0.02597   0.06165   0.03091   0.57757   0.57839
0.99             0.27539   0.05905   0.00670   0.02424   0.86679   0.07420
0.995            0.89784   0.48940   0.09884   0.30340   0.63367   0.01791
0.9975           0.53767   0.62540   0.16510   0.33546   0.97330   0.02094
0.999            0.56363   0.61872   0.01953   0.64237   0.16524   0.14445
Long position
0.05             0.15854   0.09872   0.03912   0.53765   0.06636   0.04549
0.025            0.64214   0.78839   0.08892   0.87901   0.31199   0.05126
0.01             0.34443   0.58832   0.91358   0.84624   0.56640   0.20137
0.005            0.08391   0.26490   0.29809   0.70609   0.28771   0.16759
0.0025           0.96251   0.84319   0.64295   0.63789   0.51818   0.82371
0.001            0.75496   0.42016   0.47291   0.92623   0.55421   0.67840
Table 5
VaR results (in sample): Christoffersen independence test.

                 CAC         DAX         DJIA        FTSE        NASDAQ      S&P
Short position
0.95             0.4920785   0.0997197   0.3350867   0.3788674   0.0481744   0.3402854
0.975            0.6935275   0.3933637   0.3605460   0.5673320   0.0221592   0.7115117
0.99             0.4035047   1.0000000   0.7299570   0.3881682   0.0032786   0.4595267
0.995            0.1303848   1.0000000   1.0000000   1.0000000   0.2209170   1.0000000
0.9975           1.0000000   1.0000000   1.0000000   1.0000000   0.0526796   1.0000000
0.999            0.9999999   0.9999999   1.0000000   1.0000000   1.0000000   1.0000000
Long position
0.05             0.5168807   0.2098376   0.0000001   0.0201103   0.0002235   0.0000002
0.025            0.2253621   0.5844987   0.0073123   0.6695088   0.0018464   0.0004111
0.01             1.0000000   0.0793088   0.0057938   0.1709148   0.0145943   0.5273324
0.005            1.0000000   0.0681646   0.1661345   0.0147370   0.1811938   0.0324280
0.0025           1.0000000   0.0207843   0.0087343   1.0000000   0.0392573   1.0000000
0.001            1.0000000   1.0000000   0.0213884   1.0000000   1.0000000   1.0000000

Table 7
Expected shortfall ESF1 measures.

                 CAC       DAX       DJIA      FTSE      NASDAQ    S&P
Short position
0.95             2.8234    2.6994    2.1050    2.1651    2.1841    1.8574
0.975            3.5074    3.4360    2.5339    2.5521    2.5759    2.1947
0.99             4.2455    4.4066    3.3638    2.9014    3.0517    2.6226
0.995            4.4602    4.8610    4.0416    3.3101    3.4917    2.9883
0.9975           5.4388    4.4006    4.3763    3.6501    4.2601    3.1033
0.999            7.4961    4.9021    5.9937    4.2684    4.0097    3.3413
Long position
0.05             −2.8269   −2.8116   −2.2059   −2.2174   −2.2964   −1.8425
0.025            −3.2156   −3.3884   −2.6463   −2.6420   −2.5810   −2.2098
0.01             −3.6328   −3.6077   −3.4318   −3.2951   −3.0981   −2.8154
0.005            −4.5369   −4.3840   −3.9086   −3.6449   −3.7269   −3.6290
0.0025           −5.0064   −4.7833   −4.6240   −4.1175   −4.5051   −4.4588
0.001            −5.6012   −5.0131   −6.2976   −5.3744   −4.3883   −5.8439
4.5. Dynamic quantile test of Engle–Manganelli

Engle and Manganelli (1999, 2004) suggest using a linear regression model linking current violations to past violations, in order to test the conditional efficiency hypothesis. Let Hit_t(a) = I_t(a) − a be the demeaned process associated with I_t(a):

Hit_t(a) = \begin{cases} 1 - a, & \text{if } r_t < VaR_{t|t-1}(a) \\ -a, & \text{otherwise.} \end{cases}   (28)
Table 6
VaR results (in sample): Christoffersen conditional coverage test.

                 CAC         DAX         DJIA        FTSE        NASDAQ      S&P
Short position
0.95             0.3086490   0.2392357   0.5782548   0.4086738   0.1295227   0.5890888
0.975            0.1885728   0.0582334   0.1148407   0.0827058   0.0625549   0.8002677
0.99             0.3891548   0.1683102   0.0238677   0.0544235   0.0130778   0.1545437
0.995            0.3159407   0.7874752   0.2561339   0.5888398   0.4219942   0.0606546
0.9975           0.8270093   0.8876516   0.3815722   0.6288584   0.1529588   0.0695413
0.999            0.8464306   0.8835260   0.0654425   0.8977796   0.3818213   0.3447514
Long position
0.05             0.2999568   0.1165612   0.0000001   0.0555150   0.0002042   0.0000002
0.025            0.4304496   0.8306641   0.0064465   0.9024211   0.0047045   0.0002915
0.01             0.6395886   0.1852600   0.0220914   0.3843304   0.0429907   0.3620549
0.005            0.2245344   0.1017929   0.2231090   0.0475998   0.2324537   0.0391621
0.0025           0.9988959   0.0677479   0.0288503   0.8951593   0.0969454   0.9754898
0.001            0.9524639   0.7225750   0.0547478   0.9957227   0.8395304   0.9176237
Consider the following regression model,

Hit_t(a) = \delta + \sum_{k=1}^{K} \beta_k\, Hit_{t-k}(a) + \sum_{k=1}^{K} \gamma_k\, g\!\left[Hit_{t-k}(a), Hit_{t-k-1}(a), \ldots, z_{t-k}, z_{t-k-1}, \ldots\right] + \varepsilon_t   (29)

where ε_t is an i.i.d. process and g(·) is a function of past violations and of variables z_{t−k} from the available information set Ω_{t−1}. Whatever the chosen specification, the null hypothesis of conditional efficiency corresponds to testing the joint nullity of the coefficients β_k and γ_k and of the constant δ:

H_0: \delta = \beta_k = \gamma_k = 0, \quad \forall k = 1, \ldots, K.

Therefore, the current VaR violations are uncorrelated with past violations since β_k = γ_k = 0 (a consequence of the independence hypothesis), whereas the unconditional coverage hypothesis is verified when δ = 0. The Wald statistic, denoted DQ_{CC}, associated with the test of the conditional efficiency hypothesis then verifies

DQ_{CC} = \frac{\hat{\Psi}' Z' Z \hat{\Psi}}{a(1-a)} \xrightarrow[T \to \infty]{L} \chi^2(2K+1).   (30)
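A minimal version of the dynamic quantile statistic of Eqs. (28)–(30) is sketched below. The regressor set (a constant, K lagged hits and the contemporaneous VaR as the z_{t−k} variable) and the use of the number of regressors as the χ² degrees of freedom are assumptions of this illustration; the paper reports the statistic with 2K + 1 degrees of freedom for its own specification.

```python
import numpy as np
from scipy.stats import chi2

def dq_test(returns, var, a, K=5):
    """Engle-Manganelli dynamic quantile test of Eqs. (28)-(30) for a long-position
    VaR series at tail level a (e.g. a = 0.05); returns the p-value of DQ_CC."""
    hit = (returns < var).astype(float) - a          # Eq. (28): demeaned hit process
    T = len(hit)
    # regressors: constant, K lagged hits, and the current VaR as the z_{t-k} variable
    Z = np.column_stack([np.ones(T - K)]
                        + [hit[K - k:T - k] for k in range(1, K + 1)]
                        + [var[K:]])
    y = hit[K:]
    psi, *_ = np.linalg.lstsq(Z, y, rcond=None)      # OLS estimate of (delta, beta_k, gamma)
    dq = (psi @ Z.T @ Z @ psi) / (a * (1.0 - a))     # Wald statistic of Eq. (30)
    dof = Z.shape[1]                                 # degrees of freedom = regressors in this sketch
    return 1.0 - chi2.cdf(dq, dof)
```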
Table 8
Expected shortfall ESF2 measures.

                 CAC       DAX       DJIA      FTSE      NASDAQ    S&P
Short position
0.95             1.2748    1.2530    1.3107    1.2513    1.2669    1.3046
0.975            1.2394    1.2421    1.2615    1.2201    1.2216    1.2391
0.99             1.1951    1.2461    1.2328    1.2366    1.1821    1.2006
0.995            1.1492    1.2096    1.1980    1.2323    1.1720    1.2072
0.9975           1.1749    1.2146    1.1777    1.1707    1.1541    1.2169
0.999            1.2360    1.2883    1.2569    1.1930    1.3000    1.2203
Long position
0.05             1.3424    1.3647    1.4272    1.3453    1.4056    1.4089
0.025            1.2783    1.3319    1.3521    1.2874    1.3642    1.3242
0.01             1.2845    1.3331    1.3554    1.2497    1.3002    1.3501
0.005            1.3936    1.4471    1.3321    1.2165    1.3276    1.4019
0.0025           1.3411    1.4874    1.3798    1.2562    1.3384    1.4203
0.001            1.4098    1.5360    1.4512    1.2929    1.2943    1.5117
Table 9
VaR results (in sample): dynamic quantile test.

                 CAC       DAX        DJIA        FTSE      NASDAQ      S&P
Short position
0.95             0.21618   0.58741    0.0022899   0.35364   0.39202     0.011511
0.975            0.47973   0.030510   0.072868    0.12477   0.10934     0.039415
0.99             0.69466   0.44258    0.10203     0.18513   0.0036217   0.55545
0.995            0.42136   0.98499    0.55178     0.93732   0.35640     0.24213
0.9975           0.99724   0.99898    0.85150     0.97985   0.023829    0.28817
0.999            0.99870   0.99946    0.20317     0.99973   0.83635     0.82194
Long position   CAC         DAX          DJIA
0.05            0.016335    0.0019891    5.7732e−015
0.025           0.46421     0.51104      1.0404e−008
0.01            0.0015630   0.21359      2.2824e−006
0.005           0.65069     0.047421     0.27206
0.0025          0.99991     6.1832e−006  3.6825e−010
0.001           0.99996     0.99678      4.8777e−006
0.028374
1.7807e−005   7.0733e−011   0.0047229   8.2020e−007   0.027453   0.010913
0.079073   0.32759
0.00020447   0.46282   0.016145   6.3125e−005   0.0031165   0.99779   0.00000   0.99898   0.99968
The results are shown in Table 9, taking K = 5, where the Pearson type-IV distribution appears to handle the situation better, especially at the high confidence levels.

5. Conclusion

In this paper we have elaborated on the use of the Pearson type-IV distribution in econometric modeling. On average, it appears to give better VaR results, especially at the high confidence levels, compared with the skewed Student-t distribution, and the value of the mle for the description of the pdf of financial time series returns is improved. The results of our work are in reasonable agreement with the recent results of Grigoletto and Lisi (2009, 2011) with respect to the Kupiec and Christoffersen VaR tests. The Pearson type-IV distribution provides a sound candidate as an alternative distributional scheme that might be used alongside the skewed Student-t distribution. Once our source codes are integrated and their computational efficiency is optimized, out-of-sample VaR tests that might further enhance the validity of the model are in order.

Acknowledgments

The authors would like to thank the editor and the anonymous referees for their constructive comments and suggestions. The usual caveat applies.

References

Abramowitz, M., & Stegun, I. A. (1965). Handbook of mathematical functions with formulas, graphs, and mathematical tables. Courier Dover Publications. Artzner, P., Delbaen, F., Eber, J. -M., & Heath, D. (1997). Thinking coherently. Risk, 10, 68–71. Artzner, P., Delbaen, F., Eber, J. -M., & Heath, D. (1999). Coherent measures of risk. Mathematical Finance, 9, 203–228. Ashton, D., & Tippett, M. (2006). Mean reversion and the distribution of United Kingdom stock index returns. Journal of Business Finance and Accounting, 33, 1586–1609. Baillie, R. T., Bollerslev, T., & Mikkelsen, H. O. (1996). Fractionally integrated generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 74, 3–30. Basle Committee of Banking Supervision (1996). Supervisory framework for the use of "backtesting" in conjunction with the internal models approach to market risk capital requirements. Available at www.bis.org. Basle Committee of Banking Supervision (2006). International convergence of capital measurement and capital standards — A revised framework, comprehensive version. Available at www.bis.org. Bhattacharyya, M., Chaudhary, A., & Yadav, G. (2008). Conditional VaR estimation using the Pearson's type IV distribution. European Journal of Operational Research, 191, 386–397.
Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31, 307–327. Bollerslev, T. (1987). A conditional heteroskedastic time series model for speculative prices and rates of return. The Review of Economics and Statistics, 69, 542–547. Bollerslev, T., & Mikkelsen, H. O. (1996). Modeling and pricing long-memory in stock market volatility. Journal of Econometrics, 73, 151–184. Brännäs, K., & Nordman, N. (2003). Conditional skewness modeling for stock returns. Applied Economics Letters, 10, 725–728. Chen, S., & Nie, H. (2008). Lognormal sum approximation with a variant of type IV Pearson distribution. IEEE Communication Letters, 12, 630–632. Christoffersen, P. (1998). Evaluating interval forecasts. International Economic Review, 39, 841–862. Christoffersen, P., & Pelletier, D. (2004). Backtesting value-at-risk: A duration-based approach. Journal of Financial Econometrics, 2, 84–108. Chung, C. F. (1999). Estimating the fractionally integrated GARCH model. National Taiwan University, working paper. Davidson, J. (2004). Moment and memory properties of linear conditional heteroscedasticity models, and a new model. Journal of Business and Economic Statistics, 22, 16–29. De Grauwe, P. (2009). The banking crisis: causes, consequences and remedies. Universite Catholique de Louvain, mimeo. Diamandis, P. F., Drakos, A. A., Kouretas, G. P., & Zarangas, L. (2011). Value-at-risk for long and short trading positions: Evidence from developed and emerging equity markets. International Review of Financial Analysis, 20, 165–176. Diebold, F. X., & Lopez, J. A. (1996). Forecast Evaluation and Combination. In G. S. Maddala, & C. R. Rao (Eds.), Handbook of Statistics (pp. 241–268). Amsterdam North-Holland. Ding, Z., Granger, C. W. J., & Engle, R. F. (1993). A long memory property of stock market returns and a new model. Journal of Empirical Finance, 1, 83–106. Doornik, J. A. (2009). An introduction to OxMetrics 6. A software system for data analysis and forecasting (First Edition). Timberlake Consultant Limited. Drakos, A. A., Kouretas, G. P., & Zarangas, L. P. (2010). Forecasting financial volatility of the Athens stock exchange daily returns: An application of the asymmetric normal mixture Garch model. International Journal of Finance & Economics, 15, 331–350. Engle, R. F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica, 50, 987–1008. Engle, R. F., & Bollerslev, T. (1986). Modelling the persistence of conditional variances. Econometric Reviews, 5, 1–50. Engle, R., & Manganelli, S. (1999): CAViaR: conditionalautoregressive value at risk by regression quantiles, Mimeo, San Diego, Department of Economics. Engle, R. F., & Manganelli, S. (2004). CAViaR: Conditional autoregressive value-at-risk by regression quantiles. Journal of Business and Economic Statistics, 22, 367–381. Fama, E. F. (1965). The behavior of stock market prices. Journal of Business, 38, 34–105. Fernandez, C., & Steel, M. (1998). On Bayesian modelling of fat tails and skewness. Journal of the American Statistical Association, 93, 359–371. Fielitz, B. D. (1971). Stationary or random data: Some implications for the distribution of stock price changes. Journal of Financial and Quantitative Analysis, 6, 1025–1034. Gander, W., & Gautschi, W. (2000). Adaptive quadrature — Revisited. BIT, 40, 84–101. Giot, P., & Laurent, S. (2003). Value-at-risk for long and short trading positions. Journal of Applied Econometrics, 18, 641–664. 
Glosten, L., Jagannathan, R., & Runkle, D. (1993). On the relation between the expected value and the volatility of the nominal excess return on stocks. Journal of Finance, 48, 1779–1801. Grigoletto, M., & Lisi, F. (2009). Looking for skewness in financial time series. The Econometrics Journal, 12, 310–323. Grigoletto, M., & Lisi, F. (2011). Practical implications of higher moments in risk management. Statistical Methods and Applications, 20, 487–506. Heinrich, J. (2004). A guide to the Pearson type IV distribution. CDF/MEMO/STATISTICS/ PUBLIC/6280. Hendricks, D. (1996). Evaluation of value-at-risk models using historical data. Federal Reserve Bank of New York, Economic Policy Review, 2, 39–70. Huang, Y. C., & Lin, B. J. (2004). Value-at-risk analysis for Taiwan Stock Index Futures: Fat tails and conditional asymmetries in return innovations. Review of Quantitative Finance and Accounting, 22, 79–95. Jorion, P. (2000). Value-at-risk: The new benchmark for managing financial risk. New York McGraw-Hill. Kupiec, P. H. (1995). Techniques for verifying the accuracy of risk measurement models. Journal of Derivatives, 3, 73–84. Lambert, P., & Laurent, S. (2000). Modelling skewness dynamics in series of financial data. Discussion Paper. Louvain-la-Neuve Institut de Statistique. Laurent, S. (2009). Estimating and forecasting ARCH models using G@RCH 6. London Timberlake Consultants Limited. Laurent, S., & Peters, J. -P. (2002). G@RCH 2.2, an Ox package for estimating and forecasting various ARCH models. Journal of Economic Surveys, 16, 447–485. Magdalinos, M. A., & Mitsopoulos, M. P. (2007). Conditional heteroskedasticity models with Pearson family disturbances. In G. S. A. Phillips, & E. Tzavalis (Eds.), The refinement of econometric estimation and test procedures, finite sample and asymptotic analysis (pp. 1–33). Cambridge University Press. Mandelbrot, B. (1963). The variation of certain speculative prices. Journal of Business, 36, 394–419. Mandelbrot, B. (1967). The variation of some other speculative prices. Journal of Business, 40, 393–413. McMillan, D. G., & Kambouroudis, D. (2009). Are risk metrics forecasts good enough? Evidence from 31 stock markets. International Review of Financial Analysis, 18, 117–124. Morgan, J.P./Reuters. (1996). RiskMetrics(TM) - Technical Document. New York Fourth Edition.
Nagahara, Y. (1999). The PDF and CF of Pearson type IV distributions and the ML estimation of the parameters. Statistics and Probability Letters, 43, 251–264. Nagahara, Y. (2004). A method of simulating multivariate nonnormal distributions by the Pearson distribution system. Computational Statistics & Data Analysis, 47, 1–29. Nagahara, Y. (2007). A method of calculating the downside risk by multivariate nonnormal distributions. Asian Pacific Financial Markets, 15, 175–184. Nelson, D. (1991). Conditional heteroskedasticity in asset returns: A new approach. Econometrica, 59, 347–370. Pearson, K. (1893). Contributions to the mathematical theory of evolution [abstract]. Proceedings of the Royal Society of London, 54, 329–333. Pearson, K. (1895). Contributions to the mathematical theory of evolution, II: Skew variation in homogeneous material. Philosophical Transactions of the Royal Society of London A, 186, 343–414. Pearson, K. (1901). Mathematical contributions to the theory of evolution.—X. Supplement to a memoir on skew variation. Philosophical Transactions of the Royal Society of London A, 197, 443–459. Pearson, K. (1916). Mathematical contributions to the theory of evolution.—XIX. Second supplement to a memoir on skew variation. Philosophical Transactions of the Royal Society of London A, 216, 429–457. Premaratne, G., & Bera, A. (2005). A test for symmetry with leptokurtic financial data. Journal of Financial Econometrics, 3, 169–187.
Scaillet, O. (2000). Nonparametric Estimation and Sensitivity Analysis of Expected Shortfall. Mimeo, Université Catholique de Louvain, IRES. Stambaugh, F. (1996). Risk and value at risk. European Management Journal, 14, 612–621. Stavroyiannis, S. (2001). Structural and elastic properties of magnetron-sputtered fccCo/Au(111) magnetic multilayers exhibiting oscillatory giant magnetoresistance effect. Physica Scripta, 64, 605–611. Stavroyiannis, S. (2002). Structural and elastic properties of magnetron-sputtered fccCo/Au(1 1 1) multilayers as a function of Au layer thickness. Materials Science and Engineering B, 95, 195–201. Stavroyiannis, S., Makris, I., & Nikolaidis, V. (2010). Non-extensive properties, multifractality, and inefficiency degree of the Athens Stock Exchange General Index. International Review of Financial Analysis, 19, 19–24. Subbotin, M. T. (1923). On the law of frequency of error. Matematicheskii Sbornik, 31, 296–301. Tang, T. A., & Shieh, S. J. (2006). Long memory in stock index futures markets: A value-atrisk approach. Physica A, 366, 437–448. Tse, Y. K. (1998). The conditional heteroskedasticity of the yen–dollar exchange rate. Journal of Applied Econometrics, 193, 49–55. Willink, R. (2008). A closed-form expression for the Pearson type IV distribution function. Australian & New Zeeland Journal of Statistics, 50, 199–205.