Generalized Method of Moment estimation of multivariate multifractal models

Ruipeng Liu (a), Thomas Lux (b)
(a) Department of Finance, Deakin Business School, Deakin University, Melbourne, VIC 3125, Australia
(b) Department of Economics, University of Kiel, Olshausenstraße 40, 24118 Kiel, Germany

ARTICLE INFO

JEL classification: C20; G15
Keywords: Multivariate; Multifractal; Long memory; GMM estimation

ABSTRACT

Multifractal processes have recently been introduced as a new tool for modeling the stylized facts of financial markets and have been found to consistently provide certain gains in performance over basic volatility models for a broad range of assets and for various risk management purposes. Due to computational constraints, multivariate extensions of the baseline univariate multifractal framework are, however, still very sparse so far. In this paper, we introduce a parsimoniously designed multivariate multifractal model, and we implement its estimation via a Generalized Method of Moments (GMM) algorithm. Monte Carlo studies show that the performance of this GMM estimator for bivariate and trivariate models is similar to GMM estimation for univariate multifractal models. An empirical application shows that the multivariate multifractal model improves upon the volatility forecasts of multivariate GARCH over medium to long forecast horizons.
1. Introduction

Multifractal (MF) processes have recently been introduced as a new tool for modeling the stylized facts of financial markets. In contrast to the additive structure of the seminal GARCH family of models, this new class of volatility models conceives volatility as a hierarchical, multiplicative process with heterogeneous components. The essential new feature of MF models is their ability to generate different degrees of long-term dependence in various powers of returns, a feature pervasively found in empirical financial data, cf. Lo (1991), Ding et al. (1993), Beran (1994), Lobato and Savin (1998), Zumbach (2004), among others. This feature also sets multifractal models apart from long memory models of the FIGARCH type, which are unifractal by design. Research on multifractal models originated from statistical physics (Mandelbrot, 1974). Unfortunately, the models used in physics are of a combinatorial nature and suffer from non-stationarity due to their restriction to a bounded interval and the non-convergence of moments in the continuous-time limit. This major weakness of the early so-called multifractal model of asset returns (MMAR) proposed by Mandelbrot et al. (1997) has been overcome by the development of iterative versions of the multifractal approach in the econometrics literature, the Markov-switching multifractal model (MSM) proposed by Calvet and Fisher (2001, 2004) and the multifractal random walk proposed by Bacry et al. (2000). Various subsequent developments can be found, for example, in Lux (2008), Calvet et al. (2006) or Lux and
Morales-Arias (2010). Lux and Segnon (2016) provide an up-to-date review of variants of multifractal models, available estimation techniques and empirical applications.

Although the multifractal model is a rather new tool in volatility modelling, various approaches have already been explored to estimate its parameters. The parameters of the combinatorial MMAR have been estimated via an adaptation of the scaling estimator and Legendre transformation approach from statistical physics, although this approach has been shown to likely yield unreliable results for fat-tailed data subject to volatility clustering (the well-known stylized facts of financial data), cf. Lux (2004). Maximum likelihood (ML) estimation for Markov-switching multifractal models has been developed by Calvet and Fisher (2004), and Generalized Method of Moments (GMM) estimation by Lux (2008). So far, available multifractal models are mostly univariate ones and only a few authors have explored bivariate models, cf. Bacry et al. (2000), Calvet et al. (2006), Idier (2011), Liu (2008) and Liu and Lux (2014). However, for many important questions in empirical research, multivariate settings (exceeding bivariate) are preferable, cf. Bollerslev (1990), Liesenfeld and Richard (2003). For instance, the extension of GARCH models to multivariate settings provides a number of different specifications, although most of them are highly parameterized, for details cf. Bauwens et al. (2006) and Tsay (2006). In this paper, we present a very parsimonious multivariate multifractal model with only a minimum of parameters. In the bivariate case our model can be
viewed as a special case of the more complex approach of Calvet et al. (2006), but it can be more easily extended to trivariate settings and beyond. Our main contribution in this paper is the derivation of a set of moment conditions that allows easy and fast estimation of this multivariate model.

The rest of this paper is organized as follows: Section 2 presents a brief review of multifractal models. Section 3 introduces a parsimonious multivariate multifractal model and details how it can be estimated via GMM; Monte Carlo simulations are conducted to assess the efficiency of the estimates. Section 4 provides an empirical application to a trivariate series of exchange rates. Concluding remarks are provided in Section 5. Appendices A and B provide details of the analytical moment conditions.

2. Review of multifractal models

Mandelbrot et al. (1997) introduced the multifractal model of asset returns (MMAR), adapting his 1974 model of cascades of energy flux in statistical physics to the dynamics of financial volatility. In physics, these "cascades" are typically modeled by multiplicative operations on probability measures, cf. Mandelbrot (1974) and Harte (2001). However, in a time series context the combinatorial nature of MMAR appears unfortunate, and with the non-causal nature of the time transformation from chronological to "business" time one also inherits non-stationarity of the resulting process due to the inherent restriction to a bounded interval. These limitations have been overcome by the introduction of iterative versions of multifractal processes, the most seminal development being the Markov-switching multifractal model (MSM), cf. Calvet and Fisher (2001, 2004). In their approach, asset returns are modeled as:

r_t = \sigma \Big( \prod_{i=1}^{k} M_t^{(i)} \Big)^{1/2} u_t,   (1)

with u_t drawn from a standard Normal distribution N(0, 1) and instantaneous volatility being determined by the product of k volatility components or multipliers M_t^{(1)}, M_t^{(2)}, ..., M_t^{(k)}, and a constant scale parameter \sigma. Volatility components are renewed at time t with probability \gamma_i depending on their rank i within the hierarchy of multipliers, or remain unchanged with probability 1 - \gamma_i. The transition probabilities are specified by Calvet and Fisher (2001, 2004) in a specific form that guarantees consistency between the discrete-time MSM and a continuous-time limiting multifractal process built upon a hierarchy of Poisson processes of volatility components. Convergence of the discrete model to its continuous-time counterpart holds if transition probabilities are specified as:

\gamma_i = 1 - (1 - \gamma_1)^{(b^{i-1})}, \quad \text{for } i = 1, 2, \ldots, k,   (2)

with parameters \gamma_1 \in [0, 1] and b \in (1, \infty). This iterative version of the multifractal model preserves the hierarchical structure of MMAR while dispensing with its restriction to a bounded interval. With its Markovian structure, this model is completely "well-behaved" (i.e. it shares all the convenient properties of Markov-switching processes), and it is capable of capturing some important properties of financial time series, namely volatility clustering and the power-law behaviour of the autocovariance function of absolute moments:

\mathrm{Cov}(|r_t|^q, |r_{t+\tau}|^q) \propto \tau^{2d(q)-1},   (3)

where the function d(q) indicates that different powers q of absolute returns are characterized by different hyperbolic decay factors of their autocovariances. It is worthwhile to note, however, that the power-law behavior of the MSM model holds only approximately in a pre-asymptotic range. Rather than displaying asymptotic power-law behavior of autocovariance functions in the limit t \to \infty or divergence of the spectral density at zero, the Markov-switching MF model is rather characterized by only 'apparent' long memory with an approximately hyperbolic decline of the autocorrelation of absolute powers over a finite horizon and exponential decline thereafter. In particular, approximately hyperbolic decline as expressed in Eq. (3) holds only over an interval 1 \ll \tau \ll b^k, with b the parameter of the transition probabilities of Eq. (2) and k the number of hierarchical cascade levels.

3. Multivariate multifractal model

3.1. A parsimonious framework: volatility correlations without additional parameters

One of the common motivations for extending univariate asset pricing models to multivariate ones is modeling the co-movements of volatility of different assets. Unlike the additive structure of the volatility dynamics of GARCH and stochastic volatility models, multifractal models conceive volatility as a hierarchical product of heterogeneous components. This feature allows us to decompose volatility into a hierarchical multiplicative sequence of volatility components with different frequencies. The range of these components can stretch from higher frequencies (daily or even intra-daily) to more persistent ones reflecting prevailing long-term macroeconomic or other factors which might jointly affect different assets to varying degrees. Such common or idiosyncratic factors will be captured by joint or isolated components within the hierarchy of volatility factors. This particular construction allows us to model volatility correlations among assets without the need to introduce new parameters that would be hard to estimate.1 We can modulate the volatility correlations in this framework via the number of joint components. This is different from the approach of Calvet et al. (2006), who introduce two additional parameters capturing the probability of joint arrivals of volatility innovations as well as the strength of volatility correlations within a bivariate MSM. While our model is nested as a special case in this more general approach, it has the advantage that it can easily be extended to higher-order multivariate settings without having to cope with an increase in the number of parameters. This also distinguishes our approach from the multivariate multifractal random walk of Bacry et al. (2000), which comes with a full n x n matrix of additional parameters regulating the volatility dependence among n single time series.

Let us consider an N-dimensional process governing asset returns evolving in discrete time over the interval [0, T] with equally spaced discrete time points t = 1, ..., T, and r_t = (r_1, ..., r_N)':

r_t = \sigma \,.\!*\, [g(M_t)]^{1/2} \,.\!*\, u_t,   (4)

where \sigma and u_t are N x 1 vectors and .* denotes element-by-element multiplication; u_t follows the multivariate standard Normal distribution with variance-covariance matrix \Sigma; \sigma is a vector of constant scale parameters and can be viewed as the unconditional standard deviation. g(M_t) is an N x 1 vector of the products of multifractal volatility components, i.e., g(M_t) = [g(M_{1,t}), ..., g(M_{N,t})]':

g(M_{q,t}) = \prod_{i=1}^{j} M_{q,t}^{(i)} \cdot \prod_{l=j+1}^{k} M_{q,t}^{(l)}, \qquad M_{1,t}^{(i)} = M_{2,t}^{(i)} = \cdots = M_{N,t}^{(i)} \ \text{for } 1 \le i \le j,   (5)

Eq. (5) states that each element q = 1, ..., N of g(M_t) is the instantaneous volatility of a univariate multifractal process. Within this framework, we introduce volatility co-movements in a parsimonious way without any additional parameters by assuming that the N time series share a number j of joint cascades that govern the strength of their volatility correlations. Consequently, the larger j, the higher the correlation between them.
1 Our approach of allowing for different degrees of correlation at different frequencies is similar to studying such correlations via wavelet coherence analysis, cf. Ramsey (2002) for an introduction and Barunik et al. (2016) for a recent application.
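To make the construction concrete, the following minimal Python sketch simulates a bivariate version of Eqs. (4)-(5) under the illustrative assumptions of Binomial multipliers drawn from {m0, 2-m0} and the simple transition probabilities gamma_i = 2^{-(k-i)} introduced below in Eq. (6); all function and variable names are ours and not part of the paper.

```python
import numpy as np

def simulate_bivariate_msm(T, k=8, j=4, m0=1.4, sigma=(1.0, 1.0), rho=0.5, seed=0):
    """Simulate a bivariate MSM with j joint and k-j idiosyncratic cascade levels.

    Multipliers are Binomial, drawn from {m0, 2-m0} with equal probability (so E[M]=1);
    level i is renewed with probability gamma_i = 2**(-(k-i)), cf. Eq. (6)."""
    rng = np.random.default_rng(seed)
    gamma = 2.0 ** (-(k - np.arange(1, k + 1)))           # renewal probabilities, level 1..k
    draw = lambda size: rng.choice([m0, 2.0 - m0], size=size)
    M = np.vstack([draw(k), draw(k)])                      # multipliers of the two series
    M[1, :j] = M[0, :j]                                    # first j levels are joint
    cov = np.array([[1.0, rho], [rho, 1.0]])
    r = np.zeros((T, 2))
    for t in range(T):
        renew0 = rng.random(k) < gamma                     # renewal indicators, series 1
        renew1 = renew0.copy()
        renew1[j:] = rng.random(k - j) < gamma[j:]         # idiosyncratic levels renew independently
        new0, new1 = draw(k), draw(k)
        M[0, renew0] = new0[renew0]
        M[1, renew1] = np.where(np.arange(k) < j, new0, new1)[renew1]
        g = M.prod(axis=1)                                 # instantaneous volatility factors, Eq. (5)
        u = rng.multivariate_normal(np.zeros(2), cov)
        r[t] = np.asarray(sigma) * np.sqrt(g) * u          # Eq. (4), element by element
    return r
```

A larger number of shared levels j raises the sample correlation of |r_{1,t}| and |r_{2,t}| in such simulations without adding any parameter, which is exactly the mechanism described above.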
Factors responsible for co-movements of the volatility of different markets or assets over long horizons could be macroeconomic factors, technology changes, political or natural disasters, etc. After the j joint multipliers, each series has additional independent multifractal components. Instead of introducing additional correlation parameters for the specification of new arrivals at hierarchy level i among different time series, our assumption of a joint cascade level simplifies the characterization of the dependence structure of new arrivals of volatility components across different markets or assets. Furthermore, to constrain the space of parameters further, a specification of the transition probabilities is used that closely follows the structure of the original MMAR framework and does not necessitate estimation of parameters governing the transition probabilities:

\gamma_i = 2^{-(k-i)}, \quad \text{for } i = 1, 2, \ldots, j, \ldots, k.   (6)

Each volatility component is then renewed at time t with probability \gamma_i depending on its rank within the hierarchy of multipliers and remains unchanged with probability 1 - \gamma_i. Lux (2008) reports that a model using transition probabilities of the form of Eq. (6) shows very similar performance to an alternative using the specification of Eq. (2) proposed by Calvet and Fisher (2001). Since the additional parameters \gamma_1 and b usually show relatively high sampling variability in estimation, it might be advantageous in applications to fix a "typical" hierarchical structure ex ante as proposed in Eq. (6). We specify the volatility components for all assets, along the lines of the extant literature, to be random draws from either a Binomial distribution or a Lognormal distribution. For the Binomial case the distribution of the volatility components is fully specified by the parameter m_0 \in (0, 2); for the Lognormal specification, we assume \log M \sim N(-\lambda, \sigma_m^2) and normalize it by the constraint E[M_t^{(i)}] = 1.

3.2. Maximum likelihood estimation

Calvet et al. (2006) were the first to introduce a bivariate multifractal model with Binomial distribution of the multipliers M_{q,t}^{(i)}. With both series being characterized by the same number k of multipliers, the bivariate model has a Markov-switching structure with a total of 4^k different states, say m^i, i = 1, ..., 4^k. The likelihood function of such a Markov model is defined in the usual way:

f(r_1, \ldots, r_T; \theta) = \prod_{t=1}^{T} f(r_t | r_1, \ldots, r_{t-1}) = \prod_{t=1}^{T} \Big[ \sum_{i=1}^{4^k} f(r_t | M_t = m^i) \cdot P(M_t = m^i | r_1, \ldots, r_{t-1}) \Big] = \prod_{t=1}^{T} f(r_t | M_t) \cdot (\pi_{t-1} A),   (7)

with r_t the bivariate series of asset returns and \theta the vector of parameters. The transition matrix A is composed of the conditional probabilities A_{ij} = P(M_{t+1} = m^j | M_t = m^i) with i, j \in \{1, 2, \ldots, 4^k\}, and \pi_t is the conditional probability, defined as \pi_t^i = P(M_t = m^i | r_1, \ldots, r_t). Maximum likelihood estimation is in principle always possible as long as the distribution of volatility components is discrete. However, the numerous multiplications with the transition matrix A within an optimization step pose computational constraints on this straightforward approach. Since the number of states is more restricted in the present version of the bivariate MSM, the computational complexity of the evaluation of (7) is somewhat reduced and ML estimation becomes feasible for a slightly larger range of k. Moving on to trivariate models, the dimension of the transition matrix would become 8^k, and for arbitrary n-variate processes it would amount to (2^n)^k, which could only be handled successfully keeping k small. However, the large degree of heterogeneity of volatility trajectories that can be modelled with a relatively large number k is exactly one of the attractive features of the multifractal approach. Note also that variants of MSM with a continuous distribution of multipliers could not be estimated via ML at all because of their infinite state spaces.

In order to reduce the computational burden, Calvet et al. (2006) also propose a simulation-based maximum likelihood approach using a particle filter. Instead of explicitly evaluating the 4^k x 4^k elements of the transition matrix, the particle filter uses an approximation to the prediction probability density P(M_t = m^i | r_{t-1}) based on the discrete support of a finite number B of particles. Denoting by m^{(b)} the volatility state of any particle b = 1, ..., B, the one-step-ahead conditional probability is approximated by:

\pi_t^i \propto f(r_t | M_t = m^i) \cdot \frac{1}{B} \sum_{b=1}^{B} P(M_t = m^i | M_{t-1} = m^{(b)}).   (8)

Consequently, the approximate likelihood function is obtained by a discrete approximation of the conditional densities by the 'swarm' of particles:

f(r_1, \ldots, r_T; \theta) = \prod_{t=1}^{T} f(r_t | r_1, \ldots, r_{t-1}) \approx \prod_{t=1}^{T} \Big[ \frac{1}{B} \sum_{b=1}^{B} f(r_t | M_t = m_t^{(b)}) \Big].   (9)
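A compact sketch of such a particle-filter likelihood evaluation for the bivariate Binomial model of Eqs. (4)-(6) is given below. It follows the generic sampling-importance-resampling logic behind Eqs. (8)-(9) rather than the exact implementation of Calvet et al. (2006), and all names are ours.

```python
import numpy as np

def particle_loglik(r, k, j, m0, sigma, rho, B=1000, seed=0):
    """Approximate log-likelihood of a bivariate Binomial MSM via a particle filter.

    Each particle carries the k multipliers of both series (first j levels shared);
    particles are propagated through the renewal dynamics and weighted by the
    bivariate Normal density of r_t given their volatility state, cf. Eq. (9)."""
    rng = np.random.default_rng(seed)
    T = r.shape[0]
    gamma = 2.0 ** (-(k - np.arange(1, k + 1)))            # Eq. (6)
    draw = lambda shape: rng.choice([m0, 2.0 - m0], size=shape)
    M = draw((B, 2, k))
    M[:, 1, :j] = M[:, 0, :j]                              # joint cascade levels
    one_minus_rho2 = 1.0 - rho ** 2
    loglik = 0.0
    for t in range(T):
        # propagate each particle: renew level i with probability gamma_i
        renew0 = rng.random((B, k)) < gamma
        renew1 = renew0.copy()
        renew1[:, j:] = rng.random((B, k - j)) < gamma[j:]
        new0, new1 = draw((B, k)), draw((B, k))
        M[:, 0, :][renew0] = new0[renew0]
        M[:, 1, :][renew1] = np.where(np.arange(k) < j, new0, new1)[renew1]
        g = M.prod(axis=2)                                 # (B, 2) volatility factors
        s1 = sigma[0] * np.sqrt(g[:, 0])
        s2 = sigma[1] * np.sqrt(g[:, 1])
        z1, z2 = r[t, 0] / s1, r[t, 1] / s2
        # bivariate Normal density of r_t conditional on each particle's state
        w = np.exp(-(z1**2 - 2*rho*z1*z2 + z2**2) / (2*one_minus_rho2)) \
            / (2*np.pi*s1*s2*np.sqrt(one_minus_rho2))
        loglik += np.log(w.mean() + 1e-300)                # likelihood increment, Eq. (9)
        p = w + 1e-300
        idx = rng.choice(B, size=B, p=p / p.sum())         # resample in proportion to weights
        M = M[idx]
    return loglik
```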
2 Leovey and Lux (2012) provide a reformulation of the MMAR that translates it into a stationary process and facilitates estimation using GMM.
3.3. Generalized method of moments

In order to extend the range of multifractal models that can be estimated, we adopt the Generalized Method of Moments (GMM) approach of Lux (2008) to multivariate settings and apply it not only to discrete but also to continuous distributions of the volatility components. The Generalized Method of Moments approach has been developed by Hansen (1982). In the GMM approach, the vector of parameter estimates of the model, \hat\theta, can be obtained based on analytical solutions of a set of appropriate moment conditions:

\hat\theta_T = \arg\min_{\theta \in \Theta} M(\theta)' W M(\theta),   (10)

where \Theta is the parameter space, M(\theta) is the vector of differences between sample moments and analytical moments, and W is a positive definite weighting matrix. Implementing (10), one typically starts with the identity matrix; then the inverse of the covariance matrix obtained from the first-round estimation is used as the weighting matrix in the next step, and the algorithm may either terminate after a pre-fixed number of rounds or continue until the estimates and weighting matrices converge. As is well known, \hat\theta_T is consistent and asymptotically Normal if suitable 'regularity conditions' are fulfilled (sets of which are detailed, for example, in Harris and Matyas (1999)). \hat\theta_T then converges to

T^{1/2}(\hat\theta_T - \theta_0) \sim N(0, \Xi),   (11)

with covariance matrix \Xi = (F_T' V_T^{-1} F_T)^{-1}, in which \theta_0 is the true parameter vector, \hat V_T = T \cdot \mathrm{var}\, M_T(\theta) is the covariance matrix of the moment conditions, \hat F_T(\theta) = \partial M_T(\theta)/\partial \theta is the matrix of first derivatives of the moment conditions, and V_T and F_T are the constant limiting matrices to which \hat V_T and \hat F_T converge.

The applicability of GMM for multifractal models has been discussed by Lux (2008). While GMM would be cumbersome for the first-generation MMAR,2 standard sets of regularity conditions are met with the second-generation MSM. A certain practical problem might, nevertheless, be its arbitrary proximity to a process with long memory for large k. In order to account for this proximity to long memory, Lux (2008) recommends using logarithmic differences of absolute returns together with the pertinent analytical moment conditions, i.e. to transform the observed data r_t into \tau-th differences of the log observations:

X_{t,\tau} = \ln|r_t| - \ln|r_{t-\tau}|
= \Big( \ln\sigma_1 + 0.5\sum_{i=1}^{j}\varepsilon_t^{(i)} + 0.5\sum_{h=j+1}^{k}\varepsilon_t^{(h)} + \ln|u_t| \Big) - \Big( \ln\sigma_1 + 0.5\sum_{i=1}^{j}\varepsilon_{t-\tau}^{(i)} + 0.5\sum_{h=j+1}^{k}\varepsilon_{t-\tau}^{(h)} + \ln|u_{t-\tau}| \Big)
= 0.5\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-\tau}^{(i)}) + 0.5\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-\tau}^{(h)}) + (\ln|u_t| - \ln|u_{t-\tau}|),   (12)

with \varepsilon_t^{(i)} = \ln(M_t^{(i)}). The variable X_{t,\tau} in Eq. (12) has non-zero autocovariances only over a limited number of time lags. In order to exploit the temporal scaling properties of multifractal processes, we select moment conditions for the covariances of different orders over various time lags \tau. More precisely, the moment conditions that we consider include differences in log returns as defined in Eq. (12) as well as their squares, both for the same series at different lags and across pairs of the components of the multivariate series. In particular, we select moment conditions for the powers of X_{t,\tau}, i.e. moments of the raw observations and squares of observations:

\mathrm{Cov}[X_{t+\tau,\tau}, X_{t,\tau}], \quad \mathrm{Cov}[X^2_{t+\tau,\tau}, X^2_{t,\tau}], \quad \mathrm{Cov}[X_{t+\tau,\tau}, X^-_{t,\tau}], \quad \mathrm{Cov}[X^2_{t+\tau,\tau}, (X^-_{t,\tau})^2],

where X^-_{t,\tau} stands for the time series other than X_{t,\tau}. We recognize that the transformation in Eq. (12) makes the scale parameters \sigma drop out of all the so-defined moment conditions. However, the scale parameters can easily be estimated by adding additional moment conditions, i.e., the second moments of the empirical data.

Similar to Lux (2008), we proceed by conducting Monte Carlo experiments to explore the performance of the GMM estimator for our multivariate models. Analytical moment solutions for both the Binomial and Lognormal models can be found in the Appendix. We start with the bivariate Binomial model with a number of cascade levels k=5, joint levels j=2, correlation parameter \rho = 0.5, scale parameters (unconditional standard deviations) \sigma_1 = \sigma_2 = 1, and Binomial parameter m_0 = 1.4, with sample sizes N_1 = 2000, N_2 = 5000, and N_3 = 10000. We use ten moment conditions: E[X_{t,1} Y_{t,1}], E[X_{t+1,1} Y_{t,1}], E[X_{t+1,1} X_{t,1}], E[Y_{t+1,1} Y_{t,1}], E[X^2_{t,1} Y^2_{t,1}], E[X^2_{t+1,1} Y^2_{t,1}], E[X^2_{t+1,1} X^2_{t,1}], E[Y^2_{t+1,1} Y^2_{t,1}], E[X_t^2] and E[Y_t^2]. Here, X_t and Y_t denote the two single series of differences of log returns. The analytical expressions of all these moment conditions are derived in the Appendix.
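To illustrate how the estimation works in practice, the following sketch computes the sample counterparts of these moment conditions from the log-difference series of Eq. (12) and minimizes the GMM objective of Eq. (10) in two steps. The function `analytical_moments` is a placeholder to be filled with the closed-form expressions of Appendix A or B, and the optimizer choice is our own assumption.

```python
import numpy as np
from scipy.optimize import minimize

def log_diff(r, tau=1):
    """tau-th difference of log absolute returns, cf. Eq. (12)."""
    x = np.log(np.abs(r) + 1e-12)
    return x[tau:] - x[:-tau]

def moment_contributions(r1, r2):
    """Per-observation contributions to the ten bivariate moment conditions (lag 1)."""
    X, Y = log_diff(r1), log_diff(r2)
    n = len(X) - 1
    return np.column_stack([
        X[:-1] * Y[:-1], X[1:] * Y[:-1], X[1:] * X[:-1], Y[1:] * Y[:-1],
        X[:-1]**2 * Y[:-1]**2, X[1:]**2 * Y[:-1]**2,
        X[1:]**2 * X[:-1]**2, Y[1:]**2 * Y[:-1]**2,
        r1[:n]**2, r2[:n]**2])

def gmm_estimate(r1, r2, analytical_moments, theta0):
    """Two-step GMM: minimize M(theta)' W M(theta), cf. Eq. (10).

    `analytical_moments(theta)` must return the model-implied moment vector,
    e.g. built from the expressions (A1)-(A8) or (B1)-(B5)."""
    g = moment_contributions(r1, r2)
    m_hat = g.mean(axis=0)                                  # sample moments
    def step(W, start):
        obj = lambda th: (lambda d: d @ W @ d)(m_hat - analytical_moments(th))
        return minimize(obj, start, method="Nelder-Mead").x
    theta1 = step(np.eye(g.shape[1]), np.asarray(theta0, float))   # round 1: identity W
    W2 = np.linalg.pinv(np.cov(g, rowvar=False))            # round 2: inverse moment covariance
    return step(W2, theta1)
```

In the Monte Carlo experiments below, `analytical_moments` would map the parameter vector (m_0, sigma_1, sigma_2, rho) (or (lambda, sigma_1, sigma_2, rho) in the Lognormal case) into the ten moments.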
Table 1 presents the comparisons of the exact maximum likelihood and GMM estimators for this specification. Here and in the following tables, we report the biases and standard deviations (SD) of the estimates over 400 Monte Carlo runs together with their root mean squared errors (RMSE). Clearly, ML dominates over GMM as it exploits all the information of the model. However, given the limited number of moment conditions, our GMM estimator also appears satisfactory for applied purposes, i.e., the bias and RMSE are moderate, in particular with the larger sample size N_3 = 10,000.

Table 1. Monte Carlo experiments for ML and GMM (k=4).

                  ML                          GMM
            Bias     SD      RMSE       Bias     SD      RMSE
m^0   N1   -0.011   0.017   0.022      0.031    0.102   0.102
      N2   -0.010   0.012   0.015      0.017    0.063   0.065
      N3   -0.010   0.011   0.012     -0.004    0.025   0.030
s^1   N1   -0.002   0.021   0.022     -0.013    0.027   0.028
      N2   -0.001   0.014   0.014      0.009    0.018   0.019
      N3    0.002   0.010   0.008      0.001    0.010   0.013
s^2   N1   -0.003   0.023   0.022     -0.007    0.028   0.028
      N2   -0.002   0.012   0.013     -0.004    0.017   0.018
      N3   -0.001   0.009   0.009      0.002    0.010   0.011
r^    N1    0.011   0.021   0.023      0.010    0.059   0.062
      N2    0.012   0.012   0.016     -0.005    0.040   0.040
      N3    0.011   0.009   0.014      0.004    0.025   0.028

Note: This table shows the comparisons of ML and GMM estimation for the bivariate multifractal model of Eqs. (4) and (5). m^0, s^1, s^2 and r^ denote the estimates of m_0, sigma_1, sigma_2 and rho. Simulations are based on these parameters: the number of cascade levels k=5, the joint cascade levels j=2; m_0 = 1.4, rho = 0.5, sigma_1 = 1, sigma_2 = 1. Sample lengths are N_1 = 2,000, N_2 = 5,000 and N_3 = 10,000. For each scenario, 400 Monte Carlo simulations have been carried out.

Table 2 presents comparisons of simulation-based ML using the particle filter (with number of particles B=10,000) and GMM estimates for the same bivariate model but with a larger number of multipliers, k=8, and j=2. Here we do not see much change compared to the efficiency of the GMM estimator in the previous case of k=4 in Table 1, and the additional sampling noise in SML as compared to exact ML reduces the differences between both methods remarkably.

Table 2. Monte Carlo experiments for SML and GMM (k=8).

                  SML                         GMM
            Bias     SD      RMSE       Bias     SD      RMSE
m^0   N1   -0.067   0.102   0.107      0.028    0.047   0.049
      N2   -0.024   0.063   0.067      0.016    0.035   0.036
      N3   -0.008   0.025   0.031     -0.010    0.029   0.032
s^1   N1   -0.001   0.028   0.028      0.017    0.037   0.039
      N2   -0.001   0.018   0.018     -0.014    0.023   0.024
      N3   -0.001   0.013   0.013      0.003    0.012   0.014
s^2   N1   -0.002   0.029   0.029     -0.023    0.038   0.041
      N2   -0.002   0.018   0.018      0.017    0.028   0.028
      N3   -0.001   0.013   0.013     -0.002    0.013   0.015
r^    N1   -0.001   0.061   0.066      0.033    0.058   0.060
      N2   -0.003   0.040   0.041     -0.011    0.042   0.043
      N3   -0.004   0.025   0.030      0.009    0.027   0.029

Note: This table shows the comparisons of SML and GMM estimation for the bivariate multifractal model of Eqs. (4) and (5). Simulations are based on these parameters: the number of cascade levels k=8, the joint cascade levels j=2; m_0 = 1.4, rho = 0.5, sigma_1 = 1, sigma_2 = 1. Sample lengths are N_1 = 2,000, N_2 = 5,000 and N_3 = 10,000. For each scenario, 400 Monte Carlo simulations have been carried out.

Next, we turn to bivariate models with a much larger number of multipliers, which the ML approach cannot cope with any more within reasonable computation time. Table 3 shows the results of our GMM estimator for the case of k=12 multipliers and j=6. For the Binomial distribution parameter m^0, not only the bias but also the finite sample standard deviation and root mean squared error show quite encouraging behavior. Even in the smaller sample sizes N=2000 and N=5000, the average bias is small throughout, and it is practically zero for the larger sample size N=10000. It is also interesting to note that our estimates are in harmony with T^{1/2} consistency. Compared to Tables 1 and 2, the quality of the estimates hardly varies for the same multifractal parameter m_0 and different specifications of the number of cascades k and j. Hence, as also observed in Lux (2008), the increased complexity of the model with higher k is practically irrelevant for both the efficiency and the computation time of the GMM estimator, while k beyond some boundary makes ML/SML effectively unfeasible. As for the variation of m_0 in Table 3, we observe that it is easier to identify a higher degree of fractality (higher m_0).
We then apply the GMM estimator to an MSM model with volatility components being continuously distributed, i.e., -\log M \sim N(\lambda, \sigma_m^2). Unlike the Binomial model, multifractal processes with a continuous distribution of volatility components imply an infinite dimension of the transition matrix, and the exact form of the likelihood function cannot be used directly. Therefore, the maximum likelihood approach is not applicable to the Lognormal case.3 Instead, GMM provides a convenient avenue for estimating multifractal models with continuous state spaces. Moment conditions for the Lognormal model are given in Appendix B. Note that the admissible parameter space for the location parameter \lambda is \lambda \in [0, 1), where in the borderline case \lambda = 0 the volatility process collapses to a constant (as with m_0 = 1 in the Binomial model). In our Monte Carlo studies of the bivariate Lognormal model reported in Table 4, we cover parameter values of \lambda = 0.10 to 0.40 with an increment of 0.1, and use the same numbers of joint cascade levels and sample sizes as in the Binomial case. As can be seen, the results are not too different from those obtained with the Binomial model: biases are moderate and close to zero again; SD and RMSE are moderate and decrease with increasing sample size from 2000 to 10000. Somewhat in contrast to the Binomial case, we notice a slight deterioration of efficiency with smaller sample size when increasing \lambda, which might be due to the implied increase of the variance of the lognormal draws \sigma_m^2 arising from their normalization via E[M_t^{(i)}] = 1, implying \exp(-\lambda + 0.5\sigma_m^2) = 1 and \sigma_m^2 = 2\lambda. We also note that, throughout, the parameters of the Normal innovations, \sigma_1, \sigma_2 and \rho, have more pronounced biases and RMSEs than in the Binomial case. This might reflect a higher variability of the fluctuations in the Lognormal model, but we also note that the chosen values of the parameters m_0 and \lambda are not necessarily comparable.

3 Simulation-based maximum likelihood could, however, be used to numerically approximate the likelihood function.

Table 3. Monte Carlo experiments for GMM estimation of the bivariate MF Binomial model.

                   m^0                     s^1                     s^2                     r^
             Bias    SD     RMSE     Bias    SD     RMSE     Bias    SD     RMSE     Bias    SD     RMSE
m0=1.20 N1  -0.095  0.128  0.159    0.004   0.042  0.042    0.001   0.041  0.041    0.000   0.073  0.073
        N2  -0.071  0.122  0.141    0.002   0.028  0.028    0.000   0.027  0.027    0.002   0.047  0.047
        N3  -0.054  0.103  0.116    0.001   0.019  0.019    0.000   0.019  0.018    0.003   0.032  0.032
m0=1.30 N1  -0.099  0.144  0.175    0.006   0.063  0.063    0.000   0.061  0.061    0.007   0.084  0.084
        N2  -0.045  0.107  0.116    0.004   0.042  0.042    0.000   0.041  0.041    0.002   0.052  0.052
        N3  -0.019  0.067  0.070    0.001   0.029  0.029    0.001   0.028  0.028    0.000   0.035  0.035
m0=1.40 N1  -0.064  0.120  0.136    0.009   0.090  0.090   -0.001   0.088  0.088    0.007   0.086  0.086
        N2  -0.015  0.059  0.060    0.005   0.060  0.060    0.000   0.058  0.058   -0.004   0.052  0.052
        N3  -0.004  0.033  0.034    0.001   0.042  0.042    0.001   0.041  0.041   -0.008   0.035  0.036
m0=1.50 N1  -0.041  0.074  0.085    0.009   0.132  0.132   -0.018   0.117  0.118    0.005   0.090  0.090
        N2  -0.005  0.040  0.040    0.004   0.082  0.082   -0.005   0.084  0.084   -0.016   0.054  0.057
        N3   0.001  0.024  0.024    0.001   0.058  0.058    0.002   0.060  0.060   -0.019   0.038  0.043

Note: Simulations are based on the bivariate multifractal process with parameters k=12, j=6, rho = 0.5, sigma_1 = 1, sigma_2 = 1. The moment conditions detailed in Appendix A are used. Sample lengths are N_1 = 2,000, N_2 = 5,000 and N_3 = 10,000. Bias denotes the distance between the given and estimated parameter value, SD and RMSE denote the standard deviation and root mean squared error, respectively. For each scenario, 400 Monte Carlo simulations have been carried out.

Table 4. Monte Carlo experiments for GMM estimation of the bivariate MF Lognormal model.

                   l^                      s^1                     s^2                     r^
             Bias    SD     RMSE     Bias    SD     RMSE     Bias    SD     RMSE     Bias    SD     RMSE
l=0.10  N1  -0.029  0.061  0.068   -0.066   0.352  0.358   -0.077   0.345  0.353    0.012   0.069  0.070
        N2   0.017  0.045  0.049    0.049   0.272  0.276    0.051   0.267  0.272   -0.007   0.050  0.051
        N3  -0.009  0.033  0.035   -0.010   0.204  0.204    0.011   0.204  0.204   -0.002   0.036  0.036
l=0.20  N1  -0.041  0.082  0.091   -0.166   0.441  0.471   -0.151   0.477  0.499   -0.011   0.077  0.078
        N2   0.018  0.047  0.050   -0.108   0.362  0.377    0.087   0.396  0.405    0.006   0.041  0.041
        N3  -0.009  0.033  0.034   -0.061   0.314  0.319   -0.05    0.331  0.334   -0.003   0.030  0.031
l=0.30  N1   0.043  0.084  0.095   -0.191   0.624  0.652   -0.192   0.609  0.637   -0.016   0.084  0.086
        N2   0.022  0.048  0.052    0.141   0.505  0.524   -0.122   0.544  0.557    0.011   0.042  0.043
        N3  -0.009  0.034  0.035   -0.086   0.452  0.46    -0.077   0.451  0.457    0.006   0.032  0.032
l=0.40  N1  -0.052  0.082  0.097    0.259   0.651  0.701   -0.268   0.652  0.704    0.009   0.08   0.08
        N2   0.025  0.046  0.053    0.177   0.603  0.627    0.189   0.533  0.565   -0.006   0.047  0.048
        N3   0.01   0.034  0.036   -0.153   0.541  0.561   -0.175   0.436  0.470   -0.003   0.031  0.030

Note: l^ denotes the estimate of lambda. Simulations are based on the bivariate multifractal process with parameters k=12, j=6, rho = 0.5, sigma_1 = 1, sigma_2 = 1. The moment conditions detailed in Appendix B are used. Sample lengths are N_1 = 2,000, N_2 = 5,000 and N_3 = 10,000. Bias denotes the distance between the given and estimated parameter value, SD and RMSE denote the standard deviation and root mean squared error, respectively. For each scenario, 400 Monte Carlo simulations have been carried out.

We now move on to a trivariate setting. When applying multivariate multifractal models beyond the bivariate case, the maximum likelihood approach would be computationally feasible only for numbers of cascade levels k < 3, which would hardly allow us to exploit the rich structure of multifractal models. In contrast, GMM provides a more convenient way to implement the estimation of higher-dimensional MF models. To do so, we treat each pair of time series as a bivariate case and select moment conditions for each bivariate pair. We have also conducted Monte Carlo studies for trivariate MF processes. Analogously to the bivariate models, we select moment conditions for the three pairs of bivariate data within the trivariate series, as sketched below. The total number of moment conditions in this case is 21.
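A short sketch of how such pairwise moment conditions can be stacked for three series is given below; it reuses the illustrative `log_diff` helper from Section 3.3, and the exact split between own-series and cross-series moments is our reading of the 21-moment count.

```python
import numpy as np
from itertools import combinations

def trivariate_moment_contributions(returns):
    """Per-observation contributions to the 21 moment conditions for three series.

    Own-series moments (lag-1 products of X and X^2, plus E[r^2]) enter once per
    series; the four cross-series moments enter once per pair of series."""
    X = [log_diff(returns[:, q]) for q in range(3)]        # log-difference series, Eq. (12)
    n = min(len(x) for x in X) - 1
    cols = []
    for q in range(3):                                     # 3 x 3 own-series moments
        x = X[q]
        cols += [x[1:n+1] * x[:n], x[1:n+1]**2 * x[:n]**2, returns[:n, q]**2]
    for a, b in combinations(range(3), 2):                 # 3 x 4 cross-series moments
        xa, xb = X[a], X[b]
        cols += [xa[:n] * xb[:n], xa[1:n+1] * xb[:n],
                 xa[:n]**2 * xb[:n]**2, xa[1:n+1]**2 * xb[:n]**2]
    return np.column_stack(cols)                           # shape (n, 21)
```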
These are the same moments that we also used in the bivariate case, with moments involving two series applied to all three combinations of pairs of members of the trivariate series, e.g. E[X^2_{t,1} Y^2_{t,1}], E[X^2_{t,1} Z^2_{t,1}], E[Y^2_{t,1} Z^2_{t,1}] for the three series of differences of log returns X_t, Y_t and Z_t, etc. Table 5 reports the performance of our GMM estimator for the trivariate Binomial multifractal model with k=12, j=4, and parameters m_0 = 1.3, sigma_1 = 1, sigma_2 = 1, sigma_3 = 1, rho_12 = 0.3, rho_23 = 0.5, rho_13 = 0.7. Table 6 reports the results of Monte Carlo studies for the trivariate Lognormal model with the same design (except for lambda = 0.2) as in Table 5. We observe biases and RMSEs that are very close to those obtained with the same parameters m_0 and lambda in the previous bivariate case. The passage from bivariate to trivariate time series thus appears to be almost neutral in terms of the efficiency of the GMM estimation. We have also repeated the experiments with different parameter values and different numbers of joint multipliers j, which provides similar results for both the Binomial and Lognormal cases. All in all, the performance in both the Binomial and Lognormal Monte Carlo simulation and estimation exercises appears very promising, both in the case of discrete and continuous state spaces.

Table 5. Monte Carlo experiments for GMM estimation of the trivariate multifractal Binomial model.

              Bias      SD      RMSE
m^0     N1    0.097    0.128   0.161
        N2    0.042    0.075   0.086
        N3   -0.019    0.056   0.059
s^1     N1    0.011    0.078   0.079
        N2   -0.001    0.055   0.055
        N3   -0.001    0.038   0.038
s^2     N1    0.000    0.084   0.084
        N2    0.000    0.055   0.055
        N3   -0.004    0.039   0.039
s^3     N1    0.002    0.086   0.086
        N2   -0.003    0.052   0.052
        N3    0.002    0.040   0.040
r^12    N1    0.011    0.133   0.133
        N2    0.000    0.102   0.102
        N3   -0.009    0.085   0.085
r^23    N1    0.014    0.124   0.124
        N2    0.017    0.109   0.110
        N3   -0.021    0.098   0.100
r^13    N1   -0.006    0.089   0.089
        N2    0.011    0.073   0.074
        N3    0.009    0.056   0.057

Note: Simulations are based on the trivariate Binomial multifractal process with parameters k=12, j=4, m_0 = 1.3, sigma_1 = 1, sigma_2 = 1, sigma_3 = 1, rho_12 = 0.3, rho_23 = 0.5, rho_13 = 0.7. Sample lengths are N_1 = 2,000, N_2 = 5,000 and N_3 = 10,000.

Table 6. Monte Carlo experiments for GMM estimation of the trivariate multifractal Lognormal model.

              Bias      SD      RMSE
l^      N1   -0.057    0.051   0.068
        N2    0.012    0.031   0.033
        N3    0.003    0.021   0.021
s^1     N1    0.056    0.295   0.300
        N2   -0.029    0.210   0.211
        N3   -0.027    0.154   0.156
s^2     N1   -0.068    0.277   0.285
        N2   -0.033    0.213   0.215
        N3   -0.008    0.158   0.158
s^3     N1   -0.055    0.283   0.288
        N2   -0.034    0.200   0.203
        N3   -0.011    0.177   0.177
r^12    N1    0.014    0.142   0.142
        N2   -0.018    0.101   0.102
        N3   -0.029    0.073   0.078
r^23    N1    0.020    0.088   0.088
        N2   -0.013    0.056   0.058
        N3   -0.016    0.040   0.043
r^13    N1    0.009    0.048   0.048
        N2    0.016    0.027   0.031
        N3   -0.019    0.021   0.029

Note: Simulations are based on the trivariate Lognormal multifractal process with parameters k=12, j=4, lambda = 0.2, sigma_1 = 1, sigma_2 = 1, sigma_3 = 1, rho_12 = 0.3, rho_23 = 0.5, rho_13 = 0.7. Sample lengths are N_1 = 2,000, N_2 = 5,000 and N_3 = 10,000.

4. An empirical application

Motivated by the encouraging performance of our GMM estimator for the multivariate multifractal models, we now turn to an empirical application. Table 7 presents empirical estimates of the model for a trivariate series consisting of daily data for three foreign exchange rates: the U.S. Dollar to Euro, Japanese Yen to U.S. Dollar and U.S. Dollar to British Pound (EU/JP/UK, 4th January 1999 to 31st December 2015), where the first symbols inside these parentheses designate the short notation for the corresponding time series, followed by the starting and ending dates of the sample at hand. Returns are computed as log price differences, r_t = 100 x [ln(p_t) - ln(p_{t-1})], with p_t denoting the exchange rate observations. For this exercise, we have relatively arbitrarily set k=12 and j=4. As can be seen, the results for the parameters of the Gaussian innovations are very close to each other under the Binomial and Lognormal MSM. This confirms the finding of previous papers that different specifications of the MSM model tend to perform similarly in their coverage of the volatility dynamics.

Table 7. Empirical GMM estimates of the multivariate multifractal models (EU/JP/UK).

              Binomial model      Lognormal model
m^0 / l^      1.402 (0.061)       0.093 (0.031)
s^1           0.459 (0.045)       0.455 (0.046)
s^2           0.715 (0.022)       0.711 (0.022)
s^3           0.507 (0.041)       0.504 (0.040)
r^12          0.639 (0.018)       0.638 (0.018)
r^23          0.657 (0.018)       0.656 (0.019)
r^13          0.557 (0.027)       0.558 (0.027)

Note: The table presents empirical estimates based on trivariate multifractal models with k=12 and j=4. EU/JP/UK stands for the three foreign exchange rates of U.S. Dollar to Euro, Japanese Yen to U.S. Dollar and U.S. Dollar to British Pound. The second column reports GMM estimates of the Binomial model, the third column GMM estimates of the Lognormal model; standard errors are in parentheses.

In Table 8 we compare out-of-sample mean squared errors and mean absolute errors of the estimated trivariate MSM models to those of a DCC-GARCH model. For this exercise, we have split the data into an in-sample period for parameter estimation (until 12/31/2003) and have used the rest of the time series for the out-of-sample comparison
of volatility forecasts. One observes that the parsimonious MSM provides better forecasts for horizons from about 10 days onward. Hence, this relatively simple model can indeed improve upon a baseline GARCH specification. Differences are, however, too small to be significant under the Diebold-Mariano test, as indicated by the pertinent probabilities of the null hypothesis that DCC-GARCH and Binomial (Lognormal) MSM have the same forecast accuracy against the alternative of better forecast accuracy of the pertinent MSM model, cf. the columns labelled p(DM) in Table 8.

Table 8. Comparison of volatility forecasts.

Euro
                    DCC-     Binomial  Lognormal  Binomial vs. DCC-GARCH    Lognormal vs. DCC-GARCH
      Horizon       GARCH    model     model      p(DM)    lambda  std.err  p(DM)    lambda  std.err
MSE   1             0.912    0.924     0.926      0.785   -1.307   0.642    0.802   -1.343   0.612
      2             0.915    0.916     0.916      0.571   -1.779   1.429    0.596   -1.762   1.421
      10            0.928    0.920     0.920      0.425   -0.902   0.217    0.426   -0.906   0.196
      20            0.937    0.926     0.927      0.298    0.979   0.189    0.294    0.967   0.194
      50            0.975    0.966     0.966      0.216    0.599   0.206    0.206    0.612   0.215
      100           1.021    0.997     0.996      0.122    0.775   0.135    0.125    0.767   0.144
MAE   1             1.037    1.049     1.049      0.633                     0.627
      2             1.041    1.034     1.033      0.414                     0.410
      10            1.045    1.035     1.033      0.214                     0.212
      20            1.050    1.038     1.036      0.210                     0.225
      50            1.072    1.053     1.051      0.137                     0.138
      100           1.091    1.064     1.061      0.121                     0.111

Yen
MSE   1             0.947    0.957     0.957      0.784   -1.542   0.313    0.783   -1.553   0.309
      2             0.959    0.963     0.962      0.558   -0.786   0.386    0.573   -0.772   0.370
      10            0.970    0.960     0.960      0.229    1.339   0.246    0.237    1.343   0.249
      20            0.987    0.983     0.982      0.450    1.511   0.394    0.47     1.527   0.390
      50            0.995    0.997     0.995      0.558    0.612   0.267    0.566    0.617   0.259
      100           1.006    1.005     1.003      0.464   -1.780   0.789    0.368   -1.769   0.770
MAE   1             0.988    1.039     1.039      0.814                     0.819
      2             0.994    1.053     1.052      0.902                     0.913
      10            1.005    1.063     1.061      0.904                     0.908
      20            1.017    1.078     1.076      0.878                     0.875
      50            1.024    1.102     1.098      0.881                     0.899
      100           1.035    1.114     1.110      0.911                     0.900

Pound
MSE   1             0.851    0.872     0.874      0.752   -0.123   0.136    0.765   -0.12    0.117
      2             0.854    0.854     0.855      0.502   -2.021   2.905    0.503   -2.028   2.897
      10            0.889    0.882     0.883      0.358    0.529   0.287    0.345    0.521   0.290
      20            0.905    0.894     0.896      0.267    0.666   0.225    0.286    0.659   0.231
      50            0.945    0.936     0.938      0.223    0.380   0.256    0.208    0.358   0.246
      100           0.987    0.978     0.979      0.214    0.088   0.258    0.217    0.092   0.254
MAE   1             1.017    1.035     1.034      0.616                     0.632
      2             1.020    1.008     1.005      0.314                     0.306
      10            1.021    1.005     1.002      0.186                     0.220
      20            1.022    1.001     0.998      0.171                     0.163
      50            1.025    1.000     0.998      0.105                     0.129
      100           1.031    1.015     1.012      0.144                     0.152

Note: The table exhibits mean squared errors (MSE) and mean absolute errors (MAE) of out-of-sample volatility forecasts for the trivariate DCC-GARCH and trivariate MSM models. MSE and MAE as reported in the table have been standardized by dividing by the MSE and MAE of a naive forecast using historical volatility (so that values < 1 indicate an improvement against historical volatility). p(DM) denotes the probability of the Diebold-Mariano test, while lambda is the slope estimate of the forecast encompassing regression, Eq. (13), followed by its standard error in the subsequent column.

On the other hand, conducting forecast encompassing tests using the regression (cf. Harvey et al., 1998)

e_{1,t} = \lambda (e_{1,t} - e_{2,t}) + \epsilon_t,   (13)

with e_{1,t} and e_{2,t} the errors of the forecasts from DCC-GARCH and Binomial (Lognormal) MSM, we find estimates of \lambda significantly different from zero for higher lags, indicating that DCC-GARCH does not encompass the forecasts from the MSM models at these forecast horizons. Hence, the MSM models do add significant value on top of DCC-GARCH forecasts. Note that \lambda would also be the weight of the MSM model in an optimal combination of DCC-GARCH and Binomial (Lognormal) MSM. As we can see, these weights would mostly be above 0.5 for longer lags and occasionally get close to unity. While optimal forecasts would favor DCC-GARCH for short horizons, they would shift considerable weight towards the MSM models for longer-horizon forecasts. Since MSM does not have any particular parameter to match short-run volatility correlations, but genuinely captures long memory of volatility, these mixed findings appear entirely plausible. The multivariate MSM thus appears as a promising candidate for the improvement of volatility forecasts and the assessment of market risk. The simple and easy-to-compute GMM estimator proposed in this paper should facilitate its use for said purposes.
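A minimal sketch of the forecast comparison just described is given below: it computes relative MSEs and runs the encompassing regression of Eq. (13) by OLS. The Diebold-Mariano p-values would additionally require the usual HAC-corrected test statistic, which is omitted here, and all function names are ours.

```python
import numpy as np

def relative_mse(forecasts, realized, benchmark):
    """MSE of a forecast series standardized by the MSE of a naive benchmark forecast."""
    return np.mean((forecasts - realized) ** 2) / np.mean((benchmark - realized) ** 2)

def encompassing_lambda(e1, e2):
    """OLS slope of Eq. (13): e_{1,t} = lambda * (e_{1,t} - e_{2,t}) + eps_t.

    e1: forecast errors of DCC-GARCH, e2: forecast errors of the MSM model;
    lambda is also the weight of the MSM forecast in an optimal combination."""
    x = e1 - e2
    lam = np.sum(x * e1) / np.sum(x ** 2)
    resid = e1 - lam * x
    se = np.sqrt(np.sum(resid ** 2) / (len(x) - 1) / np.sum(x ** 2))
    return lam, se
```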
5. Concluding remarks

In the present paper, we have proposed a parsimonious multivariate multifractal model in which we have captured volatility correlations across the assets of a portfolio via a subset of joint volatility components at the low-frequency end of the spectrum of hierarchical levels. Since there are restrictions on maximum likelihood (ML) estimation in such a setting due to its high-dimensional transition matrix with large state spaces, we have developed an alternative Generalized Method of Moments (GMM) approach. The moment conditions have been based on the log transformation of absolute returns. Our Monte Carlo experiments indicate that the performance of our GMM estimator does not deteriorate when moving from univariate to bivariate or trivariate models with the above type of volatility correlations. In addition, the GMM approach does not pose computational restrictions on the choice of the number of cascade levels, and GMM also applies to multifractal models with continuously distributed volatility components. An empirical application demonstrates that such a parsimonious MSM specification can improve upon a standard DCC-GARCH in forecasting volatility over medium-term to long-term forecast horizons. There are many directions for further research, including identification tests for the optimal number of multipliers and joint cascade levels, which should be promising for further empirical applications such as volatility forecasting and portfolio management.
Appendix: Moment conditions

Recall the model from Section 3. Let \varepsilon_t^{(\cdot)} = \ln(M_t^{(\cdot)}). We first compute the first log difference of returns:

X_{t,1} = \ln|r_{1,t}| - \ln|r_{1,t-1}| = \tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)}) + (\ln|u_{1,t}| - \ln|u_{1,t-1}|),

Y_{t,1} = \ln|r_{2,t}| - \ln|r_{2,t-1}| = \tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)}) + (\ln|u_{2,t}| - \ln|u_{2,t-1}|).
Appendix A. Binomial case

Sections A.1-A.6 provide the details of the six non-trivial types of moment conditions used in our GMM estimator.

A.1. The covariance between the two series with joint volatility components is defined as:

\mathrm{cov}(X_{t,1}, Y_{t,1}) = E[(X_{t,1} - E[X_{t,1}])(Y_{t,1} - E[Y_{t,1}])] = E[X_{t,1} Y_{t,1}]
= E\Big\{\Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)}) + (\ln|u_{1,t}| - \ln|u_{1,t-1}|)\Big] \cdot \Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)}) + (\ln|u_{2,t}| - \ln|u_{2,t-1}|)\Big]\Big\}
= \tfrac{1}{4} E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big] - 2E[\ln|u_t|]^2 + 2E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|].   (A1)

We first consider E[(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})^2]; its only non-zero contribution is [\ln(m_0) - \ln(2-m_0)]^2, and it occurs when a new and different draw takes place at cascade level i between t-1 and t, whose probability by definition is \tfrac{1}{2}\tfrac{1}{2^{k-i}}. Summing up we get:

\mathrm{cov}(X_{t,1}, Y_{t,1}) = 0.25\,[\ln(m_0) - \ln(2-m_0)]^2 \sum_{i=1}^{j}\frac{1}{2}\frac{1}{2^{k-i}} - 2E[\ln|u_t|]^2 + 2E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|].
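As a quick plausibility check of (A1), the multiplier-driven part of this covariance can be compared with a simulated sample moment. The Gaussian-noise expectations E[ln|u_t|]^2 and E[ln|u_{1,t}| ln|u_{2,t}|] are approximated here by Monte Carlo, and the simulator is the illustrative `simulate_bivariate_msm` sketched in Section 3.1; the whole routine is an assumption-laden sketch, not part of the paper.

```python
import numpy as np

def check_A1(k=8, j=4, m0=1.4, rho=0.5, T=200_000, seed=1):
    """Compare the sample moment E[X_{t,1} Y_{t,1}] with the analytical value of (A1)."""
    rng = np.random.default_rng(seed)
    # analytical multiplier part of (A1)
    a2 = (np.log(m0) - np.log(2.0 - m0)) ** 2
    mult_part = 0.25 * a2 * sum(0.5 * 2.0 ** (-(k - i)) for i in range(1, j + 1))
    # Gaussian-noise expectations, approximated by Monte Carlo
    cov = np.array([[1.0, rho], [rho, 1.0]])
    u = rng.multivariate_normal(np.zeros(2), cov, size=500_000)
    L = np.log(np.abs(u))
    noise_part = 2.0 * np.mean(L[:, 0] * L[:, 1]) - 2.0 * np.mean(L[:, 0]) ** 2
    # sample counterpart from a long simulated path
    r = simulate_bivariate_msm(T, k=k, j=j, m0=m0, rho=rho, seed=seed)
    X = np.diff(np.log(np.abs(r[:, 0])))
    Y = np.diff(np.log(np.abs(r[:, 1])))
    return mult_part + noise_part, np.mean(X * Y)
```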
A.2. Next, we consider:

\mathrm{cov}(X_{t+1,1}, Y_{t,1}) = E\Big\{\Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)}) + (\ln|u_{1,t+1}| - \ln|u_{1,t}|)\Big] \cdot \Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)}) + (\ln|u_{2,t}| - \ln|u_{2,t-1}|)\Big]\Big\}
= \tfrac{1}{4} E\Big[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) \cdot \sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big] + E[\ln|u_t|]^2 - E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|].   (A2)

For (\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}), non-zero contributions only occur in the case of two changes of the multiplier from time t-1 to time t+1; the probability of this occurrence is (\tfrac{1}{2}\tfrac{1}{2^{k-i}})^2. So we have the result:

\mathrm{cov}[X_{t+1,1}, Y_{t,1}] = 0.25\,[2\ln(m_0)\ln(2-m_0) - (\ln(m_0))^2 - (\ln(2-m_0))^2] \sum_{i=1}^{j}\Big(\frac{1}{2}\frac{1}{2^{k-i}}\Big)^2 + E[\ln|u_t|]^2 - E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|].
A.3. Then, we look at the moment condition for one single time series:

\mathrm{cov}[X_{t+1,1}, X_{t,1}] = E\Big\{\Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)}) + (\ln|u_{1,t+1}| - \ln|u_{1,t}|)\Big] \cdot \Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)}) + (\ln|u_{1,t}| - \ln|u_{1,t-1}|)\Big]\Big\}
= \tfrac{1}{4} E\Big[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) \cdot \sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big] + \tfrac{1}{4} E\Big[\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)}) \cdot \sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big] + E[\ln|u_t|]^2 - E[\ln|u_t|^2].   (A3)

The first component is identical to the one in the case of \mathrm{cov}[X_{t+1,1}, Y_{t,1}], and the second component can be derived in the same way. Adding up we arrive at:

\mathrm{cov}[X_{t+1,1}, X_{t,1}] = 0.25\,[2\ln(m_0)\ln(2-m_0) - (\ln(m_0))^2 - (\ln(2-m_0))^2] \sum_{i=1}^{j}\Big(\frac{1}{2}\frac{1}{2^{k-i}}\Big)^2
+ 0.25\,[2\ln(m_0)\ln(2-m_0) - (\ln(m_0))^2 - (\ln(2-m_0))^2] \sum_{i=j+1}^{k}\Big(\frac{1}{2}\frac{1}{2^{k-i}}\Big)^2 + E[\ln|u_t|]^2 - E[\ln|u_t|^2].   (A4)

By our assumption of both time series having the same number of cascade levels, the moment expressions for the two individual time series are identical for the same length of time lags.
A.4. Then, we turn to the squared observations:

E[X_{t,1}^2 Y_{t,1}^2] = E\Big\{\Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)}) + (\ln|u_{1,t}| - \ln|u_{1,t-1}|)\Big]^2 \cdot \Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)}) + (\ln|u_{2,t}| - \ln|u_{2,t-1}|)\Big]^2\Big\}
= \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^4\Big] + \tfrac{1}{16}E\Big[\Big(\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big)^2\Big(\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)})\Big)^2\Big]
+ \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big(\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big)^2\Big] + \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big(\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)})\Big)^2\Big]
+ \tfrac{1}{4}\Big\{2E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big] + 2E\Big[\Big(\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big)^2\Big]\Big\} \cdot (2E[\ln|u_t|^2] - 2E[\ln|u_t|]^2)
+ 2E[(\ln|u_{1,t}|)^2(\ln|u_{2,t}|)^2] - 8E[(\ln|u_{1,t}|)^2\ln|u_{2,t}|]\,E[\ln|u_t|] + 4E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|]^2 + 2E[(\ln|u_t|)^2]^2.

By examining each component in the expression above and using the calculations of the previous moments, it is not difficult to find the solution:

E[X_{t,1}^2 Y_{t,1}^2] = [\ln(m_0) - \ln(2-m_0)]^4 \cdot \tfrac{1}{16}\Big[\sum_{i=1}^{j}\frac{1}{2}\frac{1}{2^{k-i}} + 3\sum_{i=1}^{j}\frac{1}{2}\frac{1}{2^{k-i}}\sum_{n=1, n\ne i}^{j}\frac{1}{2}\frac{1}{2^{k-n}}\Big]
+ [\ln(m_0) - \ln(2-m_0)]^4 \cdot \tfrac{1}{16}\sum_{l=j+1}^{k}\frac{1}{2}\frac{1}{2^{k-l}}\sum_{h=j+1}^{k}\frac{1}{2}\frac{1}{2^{k-h}}
+ [\ln(m_0) - \ln(2-m_0)]^4 \cdot \tfrac{1}{8}\sum_{i=1}^{j}\frac{1}{2}\frac{1}{2^{k-i}}\sum_{l=j+1}^{k}\frac{1}{2}\frac{1}{2^{k-l}}
+ (E[\ln|u_t|^2] - E[\ln|u_t|]^2) \cdot [\ln(m_0) - \ln(2-m_0)]^2\Big(\sum_{i=1}^{j}\frac{1}{2}\frac{1}{2^{k-i}} + \sum_{i=j+1}^{k}\frac{1}{2}\frac{1}{2^{k-i}}\Big)
+ 2E[(\ln|u_{1,t}|)^2(\ln|u_{2,t}|)^2] - 8E[(\ln|u_{1,t}|)^2\ln|u_{2,t}|]\,E[\ln|u_t|] + 4E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|]^2 + 2E[(\ln|u_t|)^2]^2.   (A5)
A.5.

E[X_{t+1,1}^2 Y_{t,1}^2] = E\Big\{\Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)}) + (\ln|u_{1,t+1}| - \ln|u_{1,t}|)\Big]^2 \cdot \Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)}) + (\ln|u_{2,t}| - \ln|u_{2,t-1}|)\Big]^2\Big\}
= \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\Big)^2\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big] + \tfrac{1}{16}E\Big[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)})\Big)^2\Big(\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)})\Big)^2\Big]
+ \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)})\Big)^2\Big] + \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\Big)^2\Big(\sum_{h=j+1}^{k}(\varepsilon_t^{(h)} - \varepsilon_{t-1}^{(h)})\Big)^2\Big]
+ \tfrac{1}{4}\Big\{2E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big] + 2E\Big[\Big(\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big)^2\Big]\Big\} \cdot (2E[\ln|u_t|^2] - 2E[\ln|u_t|]^2)
+ E[(\ln|u_{1,t}|)^2(\ln|u_{2,t}|)^2] - 4E[(\ln|u_{1,t}|)^2\ln|u_{2,t}|]\,E[\ln|u_t|] + 4E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|]\,E[\ln|u_t|]^2 + 3E[\ln|u_t|^2]^2 - 4E[\ln|u_t|^2]E[\ln|u_t|]^2.

By now, the only unfamiliar component is the first term, E[(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}))^2 (\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}))^2]; there are three different forms to be considered:

(1) (\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})^2(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})^2, which has a non-zero value only if \varepsilon_{t+1}^{(i)} \ne \varepsilon_t^{(i)} \ne \varepsilon_{t-1}^{(i)}; its probability is (\tfrac{1}{2}\tfrac{1}{2^{k-i}})^2. Combining with the non-zero expectation value, we obtain \big(\sum_{i=1}^{j}(\tfrac{1}{2}\tfrac{1}{2^{k-i}})^2\big) \cdot [\ln(m_0) - \ln(2-m_0)]^4.

(2) (\varepsilon_{t+1}^{(n)} - \varepsilon_t^{(n)})^2(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})^2, which is non-zero for n \ne i, \varepsilon_{t+1}^{(n)} \ne \varepsilon_t^{(n)} and \varepsilon_t^{(i)} \ne \varepsilon_{t-1}^{(i)}; the probability of this occurrence is \sum_{i=1}^{j}\big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\sum_{n=1, n\ne i}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-n}}\big). Putting together these two possible forms we get [\ln(m_0) - \ln(2-m_0)]^4 \cdot \big(\sum_{i=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-i}}\sum_{n=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-n}}\big).

(3) (\varepsilon_{t+1}^{(n)} - \varepsilon_t^{(n)})(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})(\varepsilon_t^{(n)} - \varepsilon_{t-1}^{(n)})(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}), which for n \ne i and \varepsilon_{t+1}^{(q)} \ne \varepsilon_t^{(q)} \ne \varepsilon_{t-1}^{(q)}, q = i, n, is non-zero, and which implies 2\big\{\sum_{i=1}^{j}\big((\tfrac{1}{2}\tfrac{1}{2^{k-i}})^2\sum_{n=1, n\ne i}^{j}(\tfrac{1}{2}\tfrac{1}{2^{k-n}})^2\big)\big\} \cdot [\ln(m_0) - \ln(2-m_0)]^4.

Then we have the solution for the first component in the above moment condition:

E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\Big)^2\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big] = [\ln(m_0) - \ln(2-m_0)]^4\Big[\sum_{i=1}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\sum_{n=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-n}}\Big) + 2\sum_{i=1}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\Big)^2\sum_{n=1, n\ne i}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-n}}\Big)^2\Big].

The other components can be solved by recalling previous calculations. All in all, we finally arrive at:

E[X_{t+1,1}^2 Y_{t,1}^2] = [\ln(m_0) - \ln(2-m_0)]^4 \cdot \tfrac{1}{16}\sum_{l=j+1}^{k}\tfrac{1}{2}\tfrac{1}{2^{k-l}}\sum_{h=j+1}^{k}\tfrac{1}{2}\tfrac{1}{2^{k-h}}
+ [\ln(m_0) - \ln(2-m_0)]^4 \cdot \tfrac{1}{16}\Big[\sum_{i=1}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\sum_{n=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-n}}\Big) + 2\sum_{i=1}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\Big)^2\sum_{n=1, n\ne i}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-n}}\Big)^2\Big]
+ \tfrac{1}{8}[\ln(m_0) - \ln(2-m_0)]^4\sum_{i=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-i}}\sum_{l=j+1}^{k}\tfrac{1}{2}\tfrac{1}{2^{k-l}}
+ (E[\ln|u_t|^2] - E[\ln|u_t|]^2) \cdot [\ln(m_0) - \ln(2-m_0)]^2\Big(\sum_{i=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-i}} + \sum_{i=j+1}^{k}\tfrac{1}{2}\tfrac{1}{2^{k-i}}\Big)
+ E[(\ln|u_{1,t}|)^2(\ln|u_{2,t}|)^2] - 4E[(\ln|u_{1,t}|)^2\ln|u_{2,t}|]\,E[\ln|u_t|] + 4E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|]\,E[\ln|u_t|]^2 + 3E[\ln|u_t|^2]^2 - 4E[\ln|u_t|^2]E[\ln|u_t|]^2.   (A6)
A.6.

E[X_{t+1,1}^2 X_{t,1}^2] = E\Big\{\Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)}) + (\ln|u_{1,t+1}| - \ln|u_{1,t}|)\Big]^2 \cdot \Big[\tfrac{1}{2}\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)}) + \tfrac{1}{2}\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)}) + (\ln|u_{1,t}| - \ln|u_{1,t-1}|)\Big]^2\Big\}
= \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\Big)^2\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big] + \tfrac{1}{16}E\Big[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)})\Big)^2\Big(\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big)^2\Big]
+ \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\Big)^2\Big(\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big)^2\Big] + \tfrac{1}{16}E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)})\Big)^2\Big]
+ \tfrac{1}{4}\Big\{2E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\Big)^2\Big] + 2E\Big[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)})\Big)^2\Big]\Big\} \cdot (2E[\ln|u_t|^2] - 2E[\ln|u_t|]^2)
+ \tfrac{1}{4}\cdot 4E\Big[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big] \cdot (E[\ln|u_t|]^2 - E[\ln|u_t|^2])
+ \tfrac{1}{4}\cdot 4E\Big[\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)})\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big] \cdot (E[\ln|u_t|]^2 - E[\ln|u_t|^2])
+ \tfrac{1}{16}\cdot 4E\Big[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big] \cdot E\Big[\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)})\sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big]
+ 3E[\ln|u_t|^2]^2 + E[\ln|u_t|^4] - 4E[\ln|u_t|^3]E[\ln|u_t|].   (A7)

The first and second terms have the same structure as the first one in the case of E[X_{t+1,1}^2 Y_{t,1}^2], and the rest are familiar. Adding together, we have the result:

E[X_{t+1,1}^2 X_{t,1}^2] = [\ln(m_0) - \ln(2-m_0)]^4 \cdot \tfrac{1}{16}\Big[\sum_{l=j+1}^{k}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-l}}\sum_{h=j+1}^{k}\tfrac{1}{2}\tfrac{1}{2^{k-h}}\Big) + 2\sum_{h=j+1}^{k}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-h}}\Big)^2\sum_{l=j+1, l\ne h}^{k}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-l}}\Big)^2\Big]
+ [\ln(m_0) - \ln(2-m_0)]^4 \cdot \tfrac{1}{16}\Big[\sum_{i=1}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\sum_{n=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-n}}\Big) + 2\sum_{i=1}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\Big)^2\sum_{n=1, n\ne i}^{j}\Big(\tfrac{1}{2}\tfrac{1}{2^{k-n}}\Big)^2\Big]
+ \tfrac{1}{8}[\ln(m_0) - \ln(2-m_0)]^4\sum_{i=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-i}}\sum_{l=j+1}^{k}\tfrac{1}{2}\tfrac{1}{2^{k-l}}
+ (E[\ln|u_t|^2] - E[\ln|u_t|]^2) \cdot [\ln(m_0) - \ln(2-m_0)]^2\Big(\sum_{i=1}^{j}\tfrac{1}{2}\tfrac{1}{2^{k-i}} + \sum_{l=j+1}^{k}\tfrac{1}{2}\tfrac{1}{2^{k-l}}\Big)
+ 0.25\,[2\ln(m_0)\ln(2-m_0) - (\ln(m_0))^2 - (\ln(2-m_0))^2]\Big(\sum_{i=1}^{j}\big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\big)^2 + \sum_{l=j+1}^{k}\big(\tfrac{1}{2}\tfrac{1}{2^{k-l}}\big)^2\Big) \cdot (E[\ln|u_t|]^2 - E[\ln|u_t|^2])
+ 2\,[2\ln(m_0)\ln(2-m_0) - (\ln(m_0))^2 - (\ln(2-m_0))^2]^2\Big(\sum_{i=1}^{j}\big(\tfrac{1}{2}\tfrac{1}{2^{k-i}}\big)^2\Big)\Big(\sum_{l=j+1}^{k}\big(\tfrac{1}{2}\tfrac{1}{2^{k-l}}\big)^2\Big)
+ 3E[\ln|u_t|^2]^2 + E[\ln|u_t|^4] - 4E[\ln|u_t|^3]E[\ln|u_t|].   (A8)
Appendix B. Lognormal case

Sections B.1-B.6 provide the closed-form solutions of the six types of non-trivial moment conditions used in our GMM estimator in the Lognormal case.

B.1.

\mathrm{cov}(X_{t,1}, Y_{t,1}) = E[(X_{t,1} - E[X_{t,1}])(Y_{t,1} - E[Y_{t,1}])] = E[X_{t,1} Y_{t,1}]
= \tfrac{1}{4} E\Big[\Big(\sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big)^2\Big] + 2E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|] - 2E[\ln|u_t|]^2
= 0.5\,\sigma_\varepsilon^2 \sum_{i=1}^{j}\frac{1}{2^{k-i}} + 2E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|] - 2E[\ln|u_t|]^2.   (B1)

This holds because non-zero entries occur when \varepsilon_t^{(i)} \ne \varepsilon_{t-1}^{(i)}, which implies:

E[(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})^2] = 2\big(E[(\varepsilon_t^{(i)})^2] - E[\varepsilon_t^{(i)}]^2\big) = 2\sigma_\varepsilon^2.
B.2.

\mathrm{cov}(X_{t+1,1}, Y_{t,1}) = \tfrac{1}{4} E\Big[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) \cdot \sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big] + E[\ln|u_t|]^2 - E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|]
= -0.25\,\sigma_\varepsilon^2 \sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\Big)^2 + E[\ln|u_t|]^2 - E[\ln|u_{1,t}| \cdot \ln|u_{2,t}|],   (B2)

because non-zero entries occur when \varepsilon_{t+1}^{(i)} \ne \varepsilon_t^{(i)} \ne \varepsilon_{t-1}^{(i)}, which implies:

E[(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)})(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})] = E[\varepsilon_t^{(i)}]^2 - E[(\varepsilon_t^{(i)})^2] = -\sigma_\varepsilon^2.
B.3.

\mathrm{cov}(X_{t+1,1}, X_{t,1}) = \tfrac{1}{4} E\Big[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)} - \varepsilon_t^{(i)}) \cdot \sum_{i=1}^{j}(\varepsilon_t^{(i)} - \varepsilon_{t-1}^{(i)})\Big] + \tfrac{1}{4} E\Big[\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)} - \varepsilon_t^{(l)}) \cdot \sum_{l=j+1}^{k}(\varepsilon_t^{(l)} - \varepsilon_{t-1}^{(l)})\Big] + E[\ln|u_t|]^2 - E[\ln|u_t|^2]
= -0.25\,\sigma_\varepsilon^2\Big[\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\Big)^2 + \sum_{l=j+1}^{k}\Big(\frac{1}{2^{k-l}}\Big)^2\Big] + E[\ln|u_t|]^2 - E[\ln|u_t|^2].   (B3)
B.4
\[
\begin{aligned}
E[X_{t,1}^{2}\cdot Y_{t,1}^{2}]
&= \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{4}\right]
+ \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\Big(\sum_{h=j+1}^{k}(\varepsilon_{t}^{(h)}-\varepsilon_{t-1}^{(h)})\Big)^{2}\right]
+ \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\Big(\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\Big)^{2}\right]\\
&\quad+ \frac{1}{16}E\!\left[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\Big)^{2}\Big(\sum_{h=j+1}^{k}(\varepsilon_{t}^{(h)}-\varepsilon_{t-1}^{(h)})\Big)^{2}\right]
+ \frac{1}{4}\left\{2E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\right]
+ 2E\!\left[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\Big)^{2}\right]\right\}\cdot\big(2E[\ln|u_t|^{2}]-2E[\ln|u_t|]^{2}\big)\\
&\quad+ 2E[(\ln|u_{1,t}|)^{2}\cdot(\ln|u_{2,t}|)^{2}] - 8E[(\ln|u_{1,t}|)^{2}\cdot\ln|u_{2,t}|]\cdot E[\ln|u_t|]
+ 4E[\ln|u_{1,t}|\cdot\ln|u_{2,t}|]^{2} + 2E[(\ln|u_t|)^{2}]^{2}\\[4pt]
&= 0.75\,\sigma_\varepsilon^{4}\sum_{i=1}^{j}\frac{1}{2^{k-i}}
+ 0.25\,\sigma_\varepsilon^{4}\sum_{l=j+1}^{k}\Big(\frac{1}{2^{k-l}}\sum_{h=j+1}^{k}\frac{1}{2^{k-h}}\Big)
+ 0.25\,\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\sum_{h=j+1}^{k}\frac{1}{2^{k-h}}\Big)
+ 0.25\,\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}\Big)\\
&\quad+ 2\sigma_\varepsilon^{2}\big(E[\ln|u_t|^{2}]-E[\ln|u_t|]^{2}\big)\cdot\left(\sum_{i=1}^{j}\frac{1}{2^{k-i}}+\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}\right)
+ 2E[(\ln|u_{1,t}|)^{2}\cdot(\ln|u_{2,t}|)^{2}] - 8E[(\ln|u_{1,t}|)^{2}\cdot\ln|u_{2,t}|]\cdot E[\ln|u_t|]\\
&\quad+ 4E[\ln|u_{1,t}|\cdot\ln|u_{2,t}|]^{2} + 2E[(\ln|u_t|)^{2}]^{2}.
\end{aligned}
\tag{B4}
\]
For the first term, $E\big[(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)}))^{4}\big]$, we begin with $E[(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})^{4}]$, which for non-zero values gives:
\[
E[(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})^{4}]
= 2E[(\varepsilon_{t}^{(i)})^{4}] + 6E[(\varepsilon_{t}^{(i)})^{2}]^{2} - 8E[(\varepsilon_{t}^{(i)})^{3}]E[\varepsilon_{t}^{(i)}]
= 12\sigma_\varepsilon^{4}.
\]
This occurs with probability $1/2^{k-i}$. Then we have the solution:
\[
E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{4}\right]
= 12\sigma_\varepsilon^{4}\cdot\sum_{i=1}^{j}\frac{1}{2^{k-i}}.
\]
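As a check, inserting the normal moments listed in B.5 below ($E[\varepsilon_t]=\lambda$, $E[\varepsilon_t^{2}]=\sigma_\varepsilon^{2}+\lambda^{2}$, $E[\varepsilon_t^{3}]=\lambda^{3}+3\lambda\sigma_\varepsilon^{2}$, $E[\varepsilon_t^{4}]=\lambda^{4}+6\lambda^{2}\sigma_\varepsilon^{2}+3\sigma_\varepsilon^{4}$) gives
\[
2E[\varepsilon_t^{4}] + 6E[\varepsilon_t^{2}]^{2} - 8E[\varepsilon_t^{3}]E[\varepsilon_t]
= 2(\lambda^{4}+6\lambda^{2}\sigma_\varepsilon^{2}+3\sigma_\varepsilon^{4})
+ 6(\sigma_\varepsilon^{2}+\lambda^{2})^{2}
- 8\lambda(\lambda^{3}+3\lambda\sigma_\varepsilon^{2})
= 12\sigma_\varepsilon^{4},
\]
so the fourth moment of a changed multiplier difference does not depend on the location parameter $\lambda$.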
B.5
\[
\begin{aligned}
E[X_{t+1,1}^{2}\cdot Y_{t,1}^{2}]
&= \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\Big)^{2}\cdot\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\right]
+ \frac{1}{16}E\!\left[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)}-\varepsilon_{t}^{(l)})\Big)^{2}\Big(\sum_{h=j+1}^{k}(\varepsilon_{t}^{(h)}-\varepsilon_{t-1}^{(h)})\Big)^{2}\right]\\
&\quad+ \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)}-\varepsilon_{t}^{(l)})\Big)^{2}\right]
+ \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\Big)^{2}\Big(\sum_{h=j+1}^{k}(\varepsilon_{t}^{(h)}-\varepsilon_{t-1}^{(h)})\Big)^{2}\right]\\
&\quad+ \frac{1}{4}\left\{2E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\right]
+ 2E\!\left[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\Big)^{2}\right]\right\}\cdot\big(2E[\ln|u_t|^{2}]-2E[\ln|u_t|]^{2}\big)\\
&\quad+ E[(\ln|u_{1,t}|)^{2}\cdot(\ln|u_{2,t}|)^{2}] - 4E[(\ln|u_{1,t}|)^{2}\cdot\ln|u_{2,t}|]\cdot E[\ln|u_t|]
+ 4E[\ln|u_{1,t}|\cdot\ln|u_{2,t}|]E[\ln|u_t|]^{2}
+ 3E[\ln|u_t|^{2}]^{2} - 4E[\ln|u_t|^{2}]E[\ln|u_t|]^{2}.
\end{aligned}
\]
For the first term, $E\big[(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)}))^{2}\cdot(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)}))^{2}\big]$, there are again three different possible forms:

(1) $(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})^{2}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})^{2}$ has a non-zero value only if $\varepsilon_{t+1}^{(i)}\neq\varepsilon_{t}^{(i)}\neq\varepsilon_{t-1}^{(i)}$. Then $E[(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})^{2}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})^{2}] = E[\varepsilon_{t}^{4}]+3E[\varepsilon_{t}^{2}]^{2}-4E[\varepsilon_{t}^{3}]E[\varepsilon_{t}] = 6\sigma_\varepsilon^{4}$ (with $E[\varepsilon_{t}^{3}]=3\lambda\sigma_\varepsilon^{2}+\lambda^{3}$ and $E[\varepsilon_{t}^{4}]=3\sigma_\varepsilon^{4}+6\lambda^{2}\sigma_\varepsilon^{2}+\lambda^{4}$), and the probability of this occurrence is $(1/2^{k-i})^{2}$. Putting these together, we get $\big[\sum_{i=1}^{j}(1/2^{k-i})^{2}\big]\cdot 6\sigma_\varepsilon^{4}$.

(2) $(\varepsilon_{t+1}^{(n)}-\varepsilon_{t}^{(n)})^{2}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})^{2}$, $n\neq i$, does not equal zero if $\varepsilon_{t+1}^{(n)}\neq\varepsilon_{t}^{(n)}$ and $\varepsilon_{t}^{(i)}\neq\varepsilon_{t-1}^{(i)}$. Since $E[(\varepsilon_{t+1}^{(n)}-\varepsilon_{t}^{(n)})^{2}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})^{2}] = 4E[(\varepsilon_{t}^{(i)})^{2}]^{2}-8E[(\varepsilon_{t}^{(i)})^{2}]E[\varepsilon_{t}^{(i)}]^{2}+4E[\varepsilon_{t}^{(i)}]^{4} = 4\sigma_\varepsilon^{4}$, together with the pertinent probabilities its overall contribution yields $\big[\sum_{i=1}^{j}\big(\frac{1}{2^{k-i}}\sum_{n=1,\,n\neq i}^{j}\frac{1}{2^{k-n}}\big)\big]\cdot 4\sigma_\varepsilon^{4}$.

(3) $(\varepsilon_{t+1}^{(n)}-\varepsilon_{t}^{(n)})(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})(\varepsilon_{t}^{(n)}-\varepsilon_{t-1}^{(n)})(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})$, $n\neq i$, which is non-zero only if $\varepsilon_{t+1}^{(m)}\neq\varepsilon_{t}^{(m)}\neq\varepsilon_{t-1}^{(m)}$ for $m=i,n$. Since, conditional on these changes, $E[(\varepsilon_{t+1}^{(n)}-\varepsilon_{t}^{(n)})(\varepsilon_{t}^{(n)}-\varepsilon_{t-1}^{(n)})\,(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})] = \sigma_\varepsilon^{4}$, together with the pertinent probabilities we obtain a contribution of $2\big[\sum_{i=1}^{j}\big((\frac{1}{2^{k-i}})^{2}\sum_{n=1,\,n\neq i}^{j}(\frac{1}{2^{k-n}})^{2}\big)\big]\cdot\sigma_\varepsilon^{4}$.
Combining those three cases, we have the result:
\[
E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\Big)^{2}\cdot\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\right]
= 6\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\Big)^{2}
+ 4\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\sum_{n=1,\,n\neq i}^{j}\frac{1}{2^{k-n}}\Big)
+ 2\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\Big(\frac{1}{2^{k-i}}\Big)^{2}\sum_{n=1,\,n\neq i}^{j}\Big(\frac{1}{2^{k-n}}\Big)^{2}\Big).
\]
Since the remaining expectations can be evaluated in the same way, we arrive at:
\[
\begin{aligned}
E[X_{t+1,1}^{2}\cdot Y_{t,1}^{2}]
&= \frac{1}{16}\left[6\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\Big)^{2}
+ 4\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\sum_{n=1,\,n\neq i}^{j}\frac{1}{2^{k-n}}\Big)
+ 2\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\Big(\frac{1}{2^{k-i}}\Big)^{2}\sum_{n=1,\,n\neq i}^{j}\Big(\frac{1}{2^{k-n}}\Big)^{2}\Big)\right]\\
&\quad+ 0.25\,\sigma_\varepsilon^{4}\sum_{i=1}^{j}\frac{1}{2^{k-i}}\sum_{h=j+1}^{k}\frac{1}{2^{k-h}}
+ 0.25\,\sigma_\varepsilon^{4}\sum_{i=1}^{j}\frac{1}{2^{k-i}}\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}
+ 0.25\,\sigma_\varepsilon^{4}\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}\sum_{h=j+1}^{k}\frac{1}{2^{k-h}}\\
&\quad+ 2\sigma_\varepsilon^{2}\cdot\big(E[\ln|u_t|^{2}]-E[\ln|u_t|]^{2}\big)\cdot\left(\sum_{i=1}^{j}\frac{1}{2^{k-i}}+\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}\right)
+ E[(\ln|u_{1,t}|)^{2}\cdot(\ln|u_{2,t}|)^{2}]\\
&\quad- 4E[(\ln|u_{1,t}|)^{2}\cdot\ln|u_{2,t}|]\cdot E[\ln|u_t|]
+ 4E[\ln|u_{1,t}|\cdot\ln|u_{2,t}|]E[\ln|u_t|]^{2}
+ 3E[\ln|u_t|^{2}]^{2} - 4E[\ln|u_t|^{2}]E[\ln|u_t|]^{2}.
\end{aligned}
\]
(B5)
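The value $6\sigma_\varepsilon^{4}$ in case (1) of the derivation above can be verified with the same normal moments:
\[
E[\varepsilon_t^{4}] + 3E[\varepsilon_t^{2}]^{2} - 4E[\varepsilon_t^{3}]E[\varepsilon_t]
= (\lambda^{4}+6\lambda^{2}\sigma_\varepsilon^{2}+3\sigma_\varepsilon^{4})
+ 3(\sigma_\varepsilon^{2}+\lambda^{2})^{2}
- 4\lambda(\lambda^{3}+3\lambda\sigma_\varepsilon^{2})
= 6\sigma_\varepsilon^{4}.
\]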
B.6
\[
\begin{aligned}
E[X_{t+1,1}^{2}\cdot X_{t,1}^{2}]
&= \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\Big)^{2}\cdot\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\right]
+ \frac{1}{16}E\!\left[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)}-\varepsilon_{t}^{(l)})\Big)^{2}\cdot\Big(\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\Big)^{2}\right]\\
&\quad+ \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\Big)^{2}\Big(\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\Big)^{2}\right]
+ \frac{1}{16}E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\Big)^{2}\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)}-\varepsilon_{t}^{(l)})\Big)^{2}\right]\\
&\quad+ \frac{1}{4}\left\{2E\!\left[\Big(\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\Big)^{2}\right]
+ 2E\!\left[\Big(\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)}-\varepsilon_{t}^{(l)})\Big)^{2}\right]\right\}\cdot\big(2E[\ln|u_t|^{2}]-2E[\ln|u_t|]^{2}\big)\\
&\quad+ \frac{4}{16}E\!\left[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\right]\cdot
E\!\left[\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)}-\varepsilon_{t}^{(l)})\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\right]\\
&\quad+ \frac{4}{4}E\!\left[\sum_{i=1}^{j}(\varepsilon_{t+1}^{(i)}-\varepsilon_{t}^{(i)})\sum_{i=1}^{j}(\varepsilon_{t}^{(i)}-\varepsilon_{t-1}^{(i)})\right]\cdot\big(E[\ln|u_t|]^{2}-E[\ln|u_t|^{2}]\big)
+ \frac{4}{4}E\!\left[\sum_{l=j+1}^{k}(\varepsilon_{t+1}^{(l)}-\varepsilon_{t}^{(l)})\sum_{l=j+1}^{k}(\varepsilon_{t}^{(l)}-\varepsilon_{t-1}^{(l)})\right]\cdot\big(E[\ln|u_t|]^{2}-E[\ln|u_t|^{2}]\big)\\
&\quad+ 3E[\ln|u_t|^{2}]^{2} + E[\ln|u_t|^{4}] - 4E[\ln|u_t|^{3}]E[\ln|u_t|]\\[4pt]
&= \frac{1}{16}\left[6\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\Big)^{2}
+ 4\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\sum_{n=1,\,n\neq i}^{j}\frac{1}{2^{k-n}}\Big)
+ 2\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\Big(\frac{1}{2^{k-i}}\Big)^{2}\sum_{n=1,\,n\neq i}^{j}\Big(\frac{1}{2^{k-n}}\Big)^{2}\Big)\right]\\
&\quad+ \frac{1}{16}\left[6\sigma_\varepsilon^{4}\sum_{l=j+1}^{k}\Big(\frac{1}{2^{k-l}}\Big)^{2}
+ 4\sigma_\varepsilon^{4}\sum_{l=j+1}^{k}\Big(\frac{1}{2^{k-l}}\sum_{h=j+1,\,h\neq l}^{k}\frac{1}{2^{k-h}}\Big)
+ 2\sigma_\varepsilon^{4}\sum_{l=j+1}^{k}\Big(\Big(\frac{1}{2^{k-l}}\Big)^{2}\sum_{h=j+1,\,h\neq l}^{k}\Big(\frac{1}{2^{k-h}}\Big)^{2}\Big)\right]\\
&\quad+ 0.25\,\sigma_\varepsilon^{4}\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}\sum_{i=1}^{j}\frac{1}{2^{k-i}}
+ 0.25\,\sigma_\varepsilon^{4}\sum_{i=1}^{j}\frac{1}{2^{k-i}}\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}
+ 0.25\,\sigma_\varepsilon^{4}\sum_{i=1}^{j}\Big(\Big(\frac{1}{2^{k-i}}\Big)^{2}\sum_{l=j+1}^{k}\Big(\frac{1}{2^{k-l}}\Big)^{2}\Big)\\
&\quad+ 2\sigma_\varepsilon^{2}\left(\sum_{i=1}^{j}\frac{1}{2^{k-i}}+\sum_{l=j+1}^{k}\frac{1}{2^{k-l}}\right)\cdot\big(E[\ln|u_t|^{2}]-E[\ln|u_t|]^{2}\big)
- \sigma_\varepsilon^{2}\left(\sum_{i=1}^{j}\Big(\frac{1}{2^{k-i}}\Big)^{2}+\sum_{l=j+1}^{k}\Big(\frac{1}{2^{k-l}}\Big)^{2}\right)\cdot\big(E[\ln|u_t|]^{2}-E[\ln|u_t|^{2}]\big)\\
&\quad+ 3E[\ln|u_t|^{2}]^{2} + E[\ln|u_t|^{4}] - 4E[\ln|u_t|^{3}]E[\ln|u_t|].
\end{aligned}
\tag{B6}
\]
Note that the first term is identical to the first one in the case of $E[X_{t+1,1}^{2}\cdot Y_{t,1}^{2}]$.