Weighted multiscale cumulative residual Rényi permutation entropy of financial time series

Qin Zhou and Pengjian Shang
Physica A (2019), https://doi.org/10.1016/j.physa.2019.123089
Received 27 June 2019; revised 9 September 2019.
© 2019 Published by Elsevier B.V.

Highlights

• We propose a modified Rényi permutation entropy measure based on cumulative residual entropy.
• The multiscale method assists in detecting more abundant temporal structure of time series.
• The idea of weighting is introduced into our method.
• We investigate the complexity of synthetic data and financial stock data.
• Our method effectively distinguishes stock markets of different areas.


Weighted multiscale cumulative residual Rényi permutation entropy of financial time series

Qin Zhou* and Pengjian Shang

Department of Mathematics, School of Science, Beijing Jiaotong University, Beijing 100044, China

*Corresponding author, e-mail: [email protected], telephone: +86 18301006106

Abstract

In this paper, based on cumulative residual entropy (CRE) and Rényi permutation entropy (RPE), multiscale cumulative residual Rényi permutation entropy (MCRRPE) and weighted multiscale cumulative residual Rényi permutation entropy (WMCRRPE) are proposed as novel measures to quantify the uncertainty of nonlinear time series. First, the MCRRPE and WMCRRPE methods are applied to synthetic data, and the impact of changes to the parameters is discussed. Then, the MCRRPE and WMCRRPE methods are applied to the closing prices of the US, European and Chinese stock markets. We analyze the statistics and draw some conclusions from the comparison: the stock markets can be divided into four categories, (1) NASDAQ and FTSE100, (2) S&P500, DJI and HSI, (3) DAX30 and CAC40 and (4) SSE and SZSE, and the standard deviation has a certain relationship with WMCRRPE. Compared with the MCRRPE method, the WMCRRPE method can distinguish these financial stock markets effectively and better estimate the complexity of time series containing amplitude information.

Keywords: Cumulative residual Rényi permutation entropy, Weight, Multiple scales, Financial time series

1 Introduction

Time series analysis is a method of data analysis and processing for predicting future trends based on the laws of change in the past. In recent years, time series analysis has been widely used in business, finance, economics and other fields. Yule proposed the Auto Regressive (AR) model in 1927 [1], while Walker proposed the Moving Average (MA) model and the Autoregressive Moving Average (ARMA) model in 1931 [2]. These three models form the basis of modern time series analysis. In 1970, Box and Jenkins systematically elaborated the method of the Autoregressive Integrated Moving Average (ARIMA) model on the basis of the previous models [3]. These models are mainly used in linear, univariate and homoscedastic settings. Along with the development of time series analysis, it has been found that the classical models have many limitations in theory and application. Therefore, modeling methods for more general cases have been proposed to study time series, e.g., heteroscedastic [4-7], multivariate [8-9] and nonlinear [10-13] models.

In order to capture the uncertainty and disorder of time series, Burg proposed the maximum entropy (ME) method for spectral analysis [14]. Later, various entropies were proposed to measure the complexity of time series, including approximate entropy [15], sample entropy [16], permutation entropy [17-18], cumulative residual entropy [19], transfer entropy [20], etc. The time series of the stock market is particularly complicated: it operates on multiple time scales with typical features of non-stationarity and nonlinearity. Classical entropy analysis is based on a single scale and cannot describe the complexity of the system well. Therefore, many researchers have noted the importance of multiple-scale analysis of time series. Among them, Costa et al. proposed the multiscale entropy method to account for the fact that multiple time scales are inherent in biological systems [21-23]. Since then, multiscale entropy analysis has been widely used in many fields such as physiological systems [24-25], communication networks [26] and image processing [27].

Recently, the application of time series analysis in financial markets has become a focus of attention [28-31]. Inspired by the article [32], Yin et al. proposed multiscale permutation entropy (MPE) and weighted multiscale permutation entropy (WMPE) [33], in which the sample entropy [16] is replaced by permutation entropy [17], which preserves the amplitude information of the nonlinear time series well. The proposed scheme can detect sudden changes in the signal and assign less complexity to regular or noise-affected segments, with the advantage of noise immunity. Chen et al. extended MPE and WMPE to multiscale Rényi permutation entropy (MRPE) and weighted multiscale Rényi permutation entropy (WMRPE) in 2018 [34]. Rényi entropy was introduced by Rényi in 1961 and is a generalized form of Shannon entropy [35]; it contains Shannon entropy as a special case, that is, when the parameter q of the Rényi entropy is equal to 1, the Rényi entropy degenerates into the Shannon entropy. In 2003, Rao et al. proposed a new information metric called cumulative residual entropy (CRE) [19]. It is defined in integral form through the distribution function, which is easy to measure, and its advantages are better noise immunity and a larger range of convergence. After that, Asadi et al. extended the cumulative residual entropy to a dynamic situation [36].

In this paper, we extend MRPE and WMRPE to multiscale cumulative residual Rényi permutation entropy (MCRRPE) and weighted multiscale cumulative residual Rényi permutation entropy (WMCRRPE). When the parameter q of the Rényi entropy is equal to 1, we directly study Shannon permutation entropy instead of cumulative residual Shannon permutation entropy for comparison. The remainder of the paper is organized as follows. In Section 2 we introduce multiscale cumulative residual Rényi permutation entropy and weighted multiscale cumulative residual Rényi permutation entropy. Section 3 describes the database we use in our work. In Section 4, we present the analysis and results of the US, European and Chinese stock markets under MCRRPE and WMCRRPE, then compare the similarities and differences and make an evaluation. Finally, we summarize the methods and results in Section 5.

2 Methodology

2.1 Multiscale cumulative residual Rényi permutation entropy

Given a discrete time series {x_i}_{i=1}^{N}, we use non-overlapping windows of increasing length s to average the data points, and then construct multiple coarse-grained time series {y_j}_{j=1}^{M} [32]. The generated time series is as follows:

$$ y_j = \frac{1}{s}\sum_{i=(j-1)s+1}^{js} x_i \qquad (1) $$

where s denotes the scale factor and 1 ≤ j ≤ [N/s]. The length of the coarse-grained time series equals the length of the original time series divided by s and rounded down. Then, CRRPE is calculated as a function of s based on each of the coarse-grained time series. It is important to note that multiscale attributes can only be found in sufficiently long data.

Next, a coarse-grained time series {y_j}_{j=1}^{M} and its time-delay embedding vectors Y_k^{m,τ} = (y_k, y_{k+τ}, ..., y_{k+(m-1)τ}) for k = 1, 2, ..., M − (m−1)τ are considered, where m and τ represent the embedding dimension and time delay, respectively. In order to calculate CRRPE, we assign each of the T = M − (m−1)τ embedding vectors to a single motif among the m! possible ones, which represent all unique orderings of m different numbers. Here we collect the m! possible motifs into one set Π = {π_l^{m,τ}}_{l=1}^{m!}. Combined with cumulative residual entropy in the discrete case, CRRPE is defined as the cumulative residual Rényi entropy of the m! distinct symbols {π_l^{m,τ}}_{l=1}^{m!}, denoted as

$$ \varepsilon(m,\tau,q) = \frac{1}{1-q}\,\log\!\left(\sum_{l=1}^{m!}\Big(\sum_{i=l}^{m!} p(\pi_i^{m,\tau})\Big)^{q}\right) \qquad (2) $$

where q ≥ 0 and q ≠ 1. When q = 1, Eq. (2) can be replaced by

$$ \varepsilon(m,\tau,1) = -\sum_{l=1}^{m!}\Big(\sum_{i=l}^{m!} p(\pi_i^{m,\tau})\Big)\log\!\Big(\sum_{i=l}^{m!} p(\pi_i^{m,\tau})\Big). \qquad (3) $$

p(π_i^{m,τ}) is defined as

$$ p(\pi_i^{m,\tau}) = \frac{\sum_{k\le T}\mathbf{1}_{u:\,\mathrm{type}(u)=\pi_i^{m,\tau}}(Y_k^{m,\tau})}{\sum_{k\le T}\mathbf{1}_{u:\,\mathrm{type}(u)\in\Pi}(Y_k^{m,\tau})} \qquad (4) $$

where 1_A(u) represents the indicator function of the set A, defined as 1_A(u) = 1 if u ∈ A and 1_A(u) = 0 if u ∉ A.

We now give an example of how to extract motifs from the original time series. Given a time series with 6 values, Q = {7, 2, 9, 6, 8, 4}, two parameters should be determined, namely the embedding dimension m and the time delay τ. Here we fix m = 3 and τ = 1, so that four different three-dimensional vectors are associated with Q. The first vector is (x_1, x_2, x_3) = (7, 2, 9), and (213) is its ordinal pattern because x_2 ≤ x_1 ≤ x_3. The second vector (x_1, x_2, x_3) = (2, 9, 6) corresponds to the permutation (132) since x_1 ≤ x_3 ≤ x_2. Similarly, the other two vectors are mapped to (312) and (231), respectively [17]. By calculating the relative frequencies of the m! possible permutations π_l^{m,τ}, we obtain the motif probability distribution P = {p(π_l^{m,τ}), l = 1, 2, ..., m!}. After that, the density function is replaced by the cumulative distribution function (CDF) in the definition of the Rényi entropy. Let p'_l = Σ_{i=l}^{m!} p_i, where p_l = p(π_l^{m,τ}), l = 1, 2, ..., m!. Then we obtain P' = {p'_l, l = 1, 2, ..., m!} and compute the multiscale Rényi entropy with P'. The above procedure is called the multiscale cumulative residual Rényi permutation entropy (MCRRPE) method. When the scale factor s = 1, MCRRPE reduces to cumulative residual Rényi permutation entropy (CRRPE). When q is equal to 1, ε(m, τ, q) degenerates to multiscale cumulative residual permutation entropy (MCRPE), defined as Eq. (3); if we further set s = 1, MCRPE degenerates to cumulative residual permutation entropy (CRPE).

The fact that ε(m, τ, q) is monotone increasing when 0 ≤ q < 1 and q > 1 can also be proven. Taking the derivative of ε(m, τ, q) with respect to q, we get

$$ \varepsilon'(m,\tau,q) = \frac{1}{(1-q)^2}\log\!\left(\sum_{l=1}^{m!}\Big(\sum_{i=l}^{m!} p(\pi_i^{m,\tau})\Big)^{q}\right) + \frac{\sum_{l=1}^{m!}\Big(\sum_{i=l}^{m!} p(\pi_i^{m,\tau})\Big)^{q}\log\!\Big(\sum_{i=l}^{m!} p(\pi_i^{m,\tau})\Big)}{(1-q)\sum_{l=1}^{m!}\Big(\sum_{i=l}^{m!} p(\pi_i^{m,\tau})\Big)^{q}} \qquad (5) $$

which is always greater than zero for q ≥ 0 and q ≠ 1. We also find that ε(m, τ, q) is greater than zero when 0 ≤ q < 1 and less than zero when q > 1. Assuming two sets W = {w_i | 0 ≤ w_i < 1, w_i ≤ w_{i+1}, i = 1, 2, ..., 10} and T = {t_j | t_j > 1, t_j ≤ t_{j+1}, j = 1, 2, ..., 10}, it follows that log(m!) = ε_0 ≤ ε_{w_1} ≤ ε_{w_2} ≤ ... ≤ ε_{w_{10}} and |ε_{t_1}| ≥ |ε_{t_2}| ≥ ... ≥ |ε_{t_{10}}|. Regardless of the probability distribution, all cumulative residual Rényi permutation entropies are equal, ε(m, τ, q) ≡ log(m!), if q = 0.
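As an illustration of the construction above, the following Python sketch computes the ordinal-pattern distribution of Eq. (4) and the cumulative residual Rényi permutation entropy of Eqs. (2)-(3) for a single series. It is our own illustrative code, not taken from the paper: the function names, the use of NumPy, and the lexicographic indexing of the m! motifs (on which the cumulative sums in Eq. (2) depend) are assumptions.

```python
import numpy as np
from itertools import permutations

def ordinal_pattern_probs(y, m=3, tau=1):
    """Relative frequencies p(pi) of the m! ordinal patterns, Eq. (4)."""
    y = np.asarray(y, dtype=float)
    T = len(y) - (m - 1) * tau
    counts = dict.fromkeys(permutations(range(m)), 0)   # m! motifs in lexicographic order
    for k in range(T):
        window = y[k:k + (m - 1) * tau + 1:tau]          # (y_k, y_{k+tau}, ..., y_{k+(m-1)tau})
        motif = tuple(map(int, np.argsort(np.argsort(window))))  # rank of each element, as in the worked example
        counts[motif] += 1
    return np.array([c / T for c in counts.values()])

def crrpe(p, q):
    """Cumulative residual Renyi permutation entropy, Eqs. (2) and (3)."""
    tail = np.cumsum(p[::-1])[::-1]                      # P'_l = sum_{i >= l} p_i
    if abs(q - 1.0) < 1e-12:                             # limiting Shannon-type form, Eq. (3)
        nz = tail > 0
        return float(-np.sum(tail[nz] * np.log(tail[nz])))
    return float(np.log(np.sum(tail ** q)) / (1.0 - q))

# Worked example from the text: Q = {7, 2, 9, 6, 8, 4} with m = 3, tau = 1
p = ordinal_pattern_probs([7, 2, 9, 6, 8, 4], m=3, tau=1)
print(crrpe(p, q=0.5), crrpe(p, q=0.0))                  # the q = 0 value equals log(3!)
```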


Fig. 1. Two examples of possible m-dimensional vectors corresponding to the same motif when m = 3.

However, MCRRPE has some constraints: it cannot distinguish between distinct patterns of a certain motif, and it lacks sensitivity to noise. In the previous example, (9, 6, 8) and (8, 4, 5) correspond to the same motif and provide equal contributions to the probability p(312), but they are different sub-vectors. The main problem is that no information other than the order structure is retained when motifs are extracted from the time series. In reality, we should take the fluctuations of the motifs into account. Fig. 1 shows how the same motif can be obtained from different m-dimensional vectors.

2.2 Weighted multiscale cumulative residual Rényi permutation entropy

In order to address these constraints, we propose a modification of MCRRPE: weighted multiscale cumulative residual Rényi permutation entropy (WMCRRPE). Our goal is to preserve the amplitude information carried by the time series, and the proposed procedure helps to incorporate this significant information into the extracted ordinal patterns. Here we summarize the WMCRRPE procedure in the following steps. First, Eq. (1) generates the coarse-grained time series. We then use the weights computed from each coarse-grained time series to calculate WMCRRPE as a function of s. For each motif, we calculate the weighted relative frequencies of each coarse-grained time series as follows:

$$ p_{\omega}(\pi_i^{m,\tau}) = \frac{\sum_{k\le T}\mathbf{1}_{u:\,\mathrm{type}(u)=\pi_i^{m,\tau}}(Y_k^{m,\tau})\,\omega_k}{\sum_{k\le T}\mathbf{1}_{u:\,\mathrm{type}(u)\in\Pi}(Y_k^{m,\tau})\,\omega_k}. \qquad (6) $$

Next, Eq. (6) is used to calculate the cumulative residual Rényi permutation entropy with weights. We define it as

$$ \varepsilon_{\omega}(m,\tau,q) = \frac{1}{1-q}\,\log\!\left(\sum_{l=1}^{m!}\Big(\sum_{i=l}^{m!} p_{\omega}(\pi_i^{m,\tau})\Big)^{q}\right) \qquad (7) $$

where q ≥ 0 and q ≠ 1. When q = 1, Eq. (7) can be replaced by

$$ \varepsilon_{\omega}(m,\tau,1) = -\sum_{l=1}^{m!}\Big(\sum_{i=l}^{m!} p_{\omega}(\pi_i^{m,\tau})\Big)\log\!\Big(\sum_{i=l}^{m!} p_{\omega}(\pi_i^{m,\tau})\Big). \qquad (8) $$

If the scale factor s = 1, WMCRRPE is just weighted cumulative residual Rényi permutation entropy (WCRRPE). When q is equal to 1, ε_ω(m, τ, q) degenerates to weighted multiscale cumulative residual permutation entropy (WMCRPE), defined as Eq. (8); if we further fix s = 1, WMCRPE degenerates to weighted cumulative residual permutation entropy (WCRPE). Besides, when ω_k ≡ β for all k ≤ N and β > 0, WMCRRPE degrades to MCRRPE.

After computing p(π_i^{m,τ}), we add weights to the probability values. The selection of the weight value ω_k is equivalent to choosing a feature, or a combination of features, from each vector Y_k^{m,τ}, and the relationship Σ_i p_ω(π_i^{m,τ}) = 1 is still satisfied. In this paper, the variance of each neighbor vector Y_k^{m,τ} is used to compute the weights. Let Ȳ_k^{m,τ} be the arithmetic mean of Y_k^{m,τ}, that is

$$ \bar{Y}_k^{m,\tau} = \frac{1}{m}\sum_{t=1}^{m} y_{k+(t-1)\tau}. \qquad (9) $$

Therefore, each weight value can be expressed as

$$ \omega_k = \frac{1}{m}\sum_{t=1}^{m}\Big[\,y_{k+(t-1)\tau} - \bar{Y}_k^{m,\tau}\,\Big]^2. \qquad (10) $$

We weight the neighbor vectors with the same motif but different amplitude variations differently, and the improved p(π_i^{m,τ}) can be regarded as taking the proportion of the variance of each neighbor vector into account. With this modification, WMCRRPE retains most of the properties of MCRRPE, including its invariance under affine linear transformations, while better detecting sudden changes in the signals. In addition, WMCRRPE presents an important characteristic: since it incorporates amplitude information, it shows greater noise robustness.

The MCRRPE and WMCRRPE procedures contain several parameters: the embedding dimension m, the scale factor s, the time delay τ and the order q of the Rényi permutation entropy. These parameters need to be chosen reasonably, and their different combinations can be used to detect different properties of the time series. The choice of the value of m depends on the length M of the time series: the constraint m! << M needs to be satisfied, otherwise the errors may increase because of the finite size. Bandt and Pompe recommend m = 3, 4, ..., 7 for practical purposes [17].
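A minimal sketch of the weighted variant under the same assumptions as before: the series is coarse-grained as in Eq. (1), each embedding window is weighted by its variance (Eqs. (9)-(10)), the weighted pattern frequencies of Eq. (6) are accumulated, and Eq. (7) is evaluated. The helper names coarse_grain, weighted_pattern_probs and wcrrpe are hypothetical, not the authors' code.

```python
import numpy as np
from itertools import permutations

def coarse_grain(x, s):
    """Eq. (1): non-overlapping windows of length s are averaged; leftover points are dropped."""
    x = np.asarray(x, dtype=float)
    M = len(x) // s
    return x[:M * s].reshape(M, s).mean(axis=1)

def weighted_pattern_probs(y, m, tau):
    """Eq. (6) with variance weights from Eqs. (9)-(10)."""
    y = np.asarray(y, dtype=float)
    T = len(y) - (m - 1) * tau
    w = dict.fromkeys(permutations(range(m)), 0.0)
    for k in range(T):
        window = y[k:k + (m - 1) * tau + 1:tau]
        motif = tuple(map(int, np.argsort(np.argsort(window))))
        w[motif] += window.var()                  # omega_k: variance about the window mean
    total = sum(w.values())
    return np.array([v / total for v in w.values()])

def wcrrpe(x, m=3, tau=1, q=0.98, s=1):
    """Weighted (multiscale) cumulative residual Renyi permutation entropy, Eq. (7)."""
    p_w = weighted_pattern_probs(coarse_grain(x, s), m, tau)
    tail = np.cumsum(p_w[::-1])[::-1]             # cumulative residual sums of the weighted probabilities
    return float(np.log(np.sum(tail ** q)) / (1.0 - q))
```

Setting every ω_k to the same positive constant reproduces the unweighted probabilities of Eq. (4), consistent with the remark above that WMCRRPE degrades to MCRRPE when ω_k ≡ β.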

3 Data

3.1 Synthetic data

Here we use the logistic map x_{t+1} = r x_t (1 − x_t) to generate 10000 samples, where r is a parameter satisfying 3.5 ≤ r ≤ 4 [15]. In this paper, we set r = 3.8 and x_0 = 0.5. Let sample a denote the generated 10000 samples, and let sample b denote sample a with added white Gaussian noise of zero mean and unit variance. Fig. 2a shows sample a and Fig. 2b shows sample b.
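A short sketch of how the two synthetic samples can be generated (NumPy-based; the fixed random seed is our own addition for reproducibility):

```python
import numpy as np

def logistic_map(n, r=3.8, x0=0.5):
    """Iterate x_{t+1} = r * x_t * (1 - x_t) for n samples."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    return x

rng = np.random.default_rng(0)
sample_a = logistic_map(10000)                       # sample a
sample_b = sample_a + rng.standard_normal(10000)     # sample b: added zero-mean, unit-variance Gaussian noise
```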

Fig. 2a. Sample a generated by logistic map.

Fig. 2b. Sample b: sample a with added noise.

3.2 Financial stock data

In this paper, the financial stock data analyzed include the following nine indices: three US stock indices, Standard & Poor's 500 (S&P500), Dow Jones Indexes (DJI) and Nasdaq Indexes (NASDAQ); three European stock indices, Financial Times Stock Exchange 100 Index (FTSE100), Deutscher Aktienindex 30 (DAX30) and Cotation Assistée en Continu 40 (CAC40); and three Chinese stock indices, Shanghai Stock Exchange Composite Index (SSE), Shenzhen Component Index (SZSE) and Hang Seng Index (HSI). Note that stock markets in different regions may have different opening dates. To better study their properties, we exclude the asynchronous data and then reconnect the rest of the original time series to obtain sequences of the same length. We choose the closing prices from January 1, 2001, to October 15, 2017, with a total record of 3843 days. Let x_t be the closing price of a stock market on day t; the daily price return g_t is calculated as the logarithmic difference of closing prices between the day and the previous day, g_t = log(x_t) − log(x_{t−1}), and when t = 1, g_t = log(x_t). Then, we normalize the daily price return as R_t = (g_t − ⟨g_t⟩)/σ, where ⟨g_t⟩ is the average of the series {g_t} and σ is its standard deviation. Fig. 3 shows the daily price returns of the nine indices.
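The return preprocessing can be written compactly as below; this is a sketch under the assumption that σ denotes the sample standard deviation of {g_t}:

```python
import numpy as np

def normalized_returns(close):
    """g_t = log(x_t) - log(x_{t-1}) with g_1 = log(x_1), normalized to R_t = (g_t - <g_t>) / sigma."""
    close = np.asarray(close, dtype=float)
    g = np.empty_like(close)
    g[0] = np.log(close[0])                 # the paper's convention for t = 1
    g[1:] = np.diff(np.log(close))          # log returns for t >= 2
    return (g - g.mean()) / g.std()         # sigma taken as the sample standard deviation (assumption)
```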
4 Analysis and results

In this section, we first use the MCRRPE and WMCRRPE methods on the synthetic data and discuss the effects of different parameter values. Then the methods are applied to the closing prices of nine stock markets in different regions. We compare the similarities and differences between the nine stock markets and show the advantages of the proposed methods.

4.1 MCRRPE and WMCRRPE analysis on synthetic data

4.1.1 Analysis of the variable q

First of all, we perform a simple analysis of the variable q and calculate MCRRPE and WMCRRPE for sample a and sample b with embedding dimension m from 2 to 7. Here we set the scale factor s = 1 and the time delay τ = 1, so that MCRRPE and WMCRRPE degenerate to CRRPE and WCRRPE, respectively. It should be emphasized that when q = 1, we directly plot permutation entropy instead of cumulative residual permutation entropy in order to make a comparison of (W)MPE with the proposed methods. Fig. 4a shows the CRRPE and WCRRPE results of sample a, and Fig. 4b shows the CRRPE and WCRRPE results of sample b.

Fig. 3. The daily price returns for nine stock indices. (The first row shows three US stock indices, the second row shows three European stock indices and the third row shows three Chinese stock indices.)


Fig. 4a. CRRPE versus WCRRPE results in the case of sample a.

As can be seen from Fig. 4a, WCRRPE is almost equal to CRRPE. The closer q is to 1, the greater the absolute value of entropy, and near q = 1 the entropy value approaches infinity. When q is close to 1, the differences between CRRPE and WCRRPE become larger if we set m = 2; nevertheless, we can hardly observe the differences in the same situation when m > 2. Obviously, the absolute value of entropy increases with the increase of m, indicating that a higher embedding dimension means larger entropy values. In the same interval of the parameter q, a higher embedding dimension m results in a wider range of entropy values. Furthermore, the PE value differs from the CRRPE values with different q, and the WPE value differs from the WCRRPE values with different q, which points out the superiority of the CRRPE and WCRRPE methods, since they can amplify the difference in entropy values under different embedding dimensions m. Similar results can be obtained from Fig. 4b. It should be noted that the difference between CRRPE and WCRRPE becomes smaller after adding white Gaussian noise: when q is close to 1, we can hardly observe the difference between CRRPE and WCRRPE even if m = 2.

Fig. 4b. CRRPE versus WCRRPE results in the case of sample b.

Next, we address the specifics of the difference between CRRPE and WCRRPE. Fig. 5a shows the difference in entropy values between CRRPE and WCRRPE for sample a with different q values, and Fig. 5b shows the difference in entropy values between CRRPE and WCRRPE for sample b with different q values.


Fig. 5a. The difference in entropy values between CRRPE and WCRRPE in the case of sample a.

Fig. 5b. The difference in entropy values between CRRPE and WCRRPE in the case of sample b.

Fig. 5a shows that when m = 2, the difference in entropy values between CRRPE and WCRRPE for sample a is bigger than under the other embedding dimensions with the same parameter q. The closer q is to 1, the bigger the difference in entropy values between CRRPE and WCRRPE. As can be seen from Fig. 5b, there is little difference between CRRPE and WCRRPE for sample b with the parameter q ranging from 0.2 to 0.8 and from 1.2 to 1.8, respectively. Furthermore, the difference in entropy values between CRRPE and WCRRPE is much smaller in the case of sample b than in the case of sample a. When q = 1, the point on the graph shows the difference in entropy values between PE and WPE. Here we observe that WPE is always less than PE, indicating that WPE contains more information. Moreover, the difference in entropy values between PE and WPE increases with the increase of the embedding dimension m.

From Figs. 4a-4b, we observe that the absolute values of entropy corresponding to different values of the Rényi parameter q are approximately symmetrical with respect to q = 1. Therefore, we study the symmetry of the absolute values of entropy with respect to the parameter q. Here we fix the scale s = 1 and the time delay τ = 1. In the intervals [0.2, 0.98] and [1.02, 1.8], 40 discrete points are taken uniformly in each; we calculate the difference between the absolute values of entropy at the paired q values in the two intervals, and then plot it as a function of the parameter q. The differences in absolute WMCRRPE results for sample a and sample b are shown in Figs. 6a-7b. In Figs. 6a-7b, we can see that the differences in absolute WMCRRPE results are small for two points q placed symmetrically about q = 1. Obviously, the differences decrease monotonically as the two symmetric points q get closer to each other. When the embedding dimension m is enlarged from 2 to 7, the differences in absolute WMCRRPE results corresponding to the same two points q increase. We have similar observations for the differences in the absolute MCRRPE results in Appendix A.
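The symmetry check described above can be scripted directly. The sketch below reuses the illustrative crrpe and ordinal_pattern_probs helpers and the sample_a series from the earlier sketches, and compares |ε| at pairs of q values mirrored about q = 1; the pairing rule is our reading of the text.

```python
import numpy as np

# 40 uniformly spaced points on each side of q = 1, as in the text
q_low = np.linspace(0.2, 0.98, 40)
q_high = 2.0 - q_low                      # the mirror points, spanning [1.02, 1.8]

p = ordinal_pattern_probs(sample_a, m=2, tau=1)
sym_diff = [abs(abs(crrpe(p, qh)) - abs(crrpe(p, ql))) for ql, qh in zip(q_low, q_high)]
```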

Fig. 6a. The difference of absolute WMCRRPE results in the case of sample a on m = 2.

Fig. 6b. The difference of absolute WMCRRPE results in the case of sample b on m = 2.

Fig. 7a. The difference of absolute WMCRRPE results in the case of sample a on m = 7.

Fig. 7b. The difference of absolute WMCRRPE results in the case of sample b on m = 7.

4.1.2 Analysis of the variable s

We take the MCRRPE and WMCRRPE results for sample a and sample b with embedding dimension m from 2 to 7, parameter q = 0.98 and time delay τ = 1 as an example. Fig. 8a shows the MCRRPE and WMCRRPE results in the case of sample a, and Fig. 8b shows the MCRRPE and WMCRRPE results in the case of sample b.

Fig. 8a. MCRRPE versus WMCRRPE results in the case of sample a.

Fig. 8b. MCRRPE versus WMCRRPE results in the case of sample b.

As can be seen from Fig. 8a, for sample a the WMCRRPE results differ little from the MCRRPE results at the same embedding dimension on the same scale. Moreover, the entropy results change little under the same embedding dimension. There are significant differences between MCRRPE and WMCRRPE if we set s < 10, and the differences diminish as the embedding dimension grows. Then we discuss the situation when the scale s increases to 10 and beyond. For m = 2, the entropy results remain unchanged with the increase of the scale; furthermore, we observe small differences between MCRRPE and WMCRRPE when s > 26. When m increases to 3, MCRRPE and WMCRRPE begin to fluctuate slightly with the increase of the scale, and when s > 26 the differences between them become bigger. When m increases to 4 and beyond, the growing embedding dimension enlarges the fluctuations; when s > 27, we observe the entropy results declining slightly, and the downtrend becomes clearer and clearer as the embedding dimension grows.

In Fig. 8b, we have similar observations as in Fig. 8a. Nevertheless, there are still some differences. For sample b, MCRRPE and WMCRRPE show no difference under the same embedding dimension when we set s < 10. When m = 2, the MCRRPE and WMCRRPE values on multiple scales are equal to the CRRPE and WCRRPE values, respectively. When m increases to 3, the entropy results show slight fluctuations. When m increases to 4, we observe differences between MCRRPE and WMCRRPE at s = 28; both these differences and the fluctuations in the entropy curves become larger as the embedding dimension grows.
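The multiscale curves discussed here amount to evaluating the entropy once per scale factor, for example with the hypothetical wcrrpe helper sketched at the end of Section 2.2:

```python
scales = range(1, 31)
wmcrrpe_curve = [wcrrpe(sample_a, m=7, tau=1, q=0.98, s=s) for s in scales]  # WMCRRPE as a function of the scale factor
```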

4.1.3 Analysis of the variable τ

We now analyze how the entropy curves change with different time delays. Here we set the scale factor s = 1 and the Rényi entropy parameter q = 0.98. We show the MCRRPE and WMCRRPE results for samples a and b with embedding dimension m from 2 to 7 and time delay τ from 1 to 20.


Fig. 9a. MCRRPE versus WMCRRPE results in the case of sample a.

We first discuss the MCRRPE and WMCRRPE results in the case of sample a from τ = 1 to τ = 4. As can be seen from Fig. 9a, the entropy results fluctuate greatly. There are obvious differences between MCRRPE and WMCRRPE, and the differences diminish with the increase of the embedding dimension m from 2 to 7. Then we discuss the situation from τ = 5 to τ = 20. Here MCRRPE is equal to WMCRRPE for embedding dimensions m from 2 to 7. For m = 2, MCRRPE and WMCRRPE do not change with the time delay. When m increases to 3, MCRRPE and WMCRRPE begin to fluctuate slightly, and the fluctuations increase as the embedding dimension m increases.

Fig. 9b. MCRRPE versus WMCRRPE results in the case of sample b.

In Fig. 9b, there is almost no difference between MCRRPE and WMCRRPE for embedding dimensions m from 2 to 7. When m = 2, 3, 4, the MCRRPE and WMCRRPE results remain unchanged with the increase of the time delay. When m increases to 5, they begin to fluctuate, and the fluctuations become larger with the increase of the embedding dimension. By comparing Figs. 9a-9b, we observe that there are dramatic differences in the WMCRRPE results between sample a and sample b, and likewise for MCRRPE, which indicates a difference in the temporal structure of samples a and b.

4.1.4 Analysis of multiple variables

Finally, the above findings for MCRRPE and WMCRRPE are extended to more general cases. We set the time delay τ = 1 and select the embedding dimension m = 7 as an example; however, we no longer limit the values of the scale factor s and the Rényi entropy parameter q. The MCRRPE and WMCRRPE values are calculated for samples a and b, respectively, and then plotted as functions of s and q. The MCRRPE and WMCRRPE results for sample a and sample b are shown in Figs. 10a-10b. In Figs. 10a-10b, when we fix the value of the parameter q, the curve with embedding dimension m = 7 is the intersection of the 3D surface and the plane with q equal to the fixed value. Similarly, if we fix the value of the time scale s, the curve with embedding dimension m = 7 is the intersection of the 3D surface and the plane with s equal to the fixed value. The same results as before for the Rényi parameter q or the scale factor s can be obtained from the line of intersection (see Appendix A).

After that, we fix the time delay τ = 1 and discuss the difference between sample a and sample b for the entropy results with embedding dimensions m = 6 and m = 7. Fig. 11a presents the comparison for the MCRRPE results on m = 6, 7, and Fig. 11b presents the comparison for the WMCRRPE results on m = 6, 7. As can be seen from Fig. 11a, when q deviates from 1, the difference for MCRRPE between sample a and sample b is close to 0, which means similarity between the two samples.


Fig. 10a. MCRRPE versus WMCRRPE results in the case of sample a with embedding dimension m = 7.


Fig. 10b. MCRRPE versus WMCRRPE results in the case of sample b with embedding dimension m = 7.

When q is close to 1, the difference for MCRRPE between sample a and sample b is much greater than 0. This reminds us that choosing a q value close to 1 allows us to better observe the properties and find the difference after adding noise; conversely, choosing a q value that deviates from 1 gives better noise immunity. When the scale factor s is equal to 1, the difference is easy to observe, since most of the time series structure is based on multiple scales. When the scale factor s is greater than 25, the difference is also more pronounced than in the other cases. If the embedding dimension m increases from 6 to 7, the difference for the same parameter q and scale factor s increases as well. The same results for WMCRRPE are shown in Fig. 11b. A similar analysis with different Rényi parameters q and time delays τ can be found in Appendix A, where we set the scale s = 1.

Fig. 11a. The difference between sample a and sample b for MCRRPE results with different embedding dimensions ((a) m = 6, (b) m = 7).

Fig. 11b. The difference between sample a and sample b for WMCRRPE results with different embedding dimensions ((a) m = 6, (b) m = 7).

For sample a, we compare the differences of MCRRPE and WMCRRPE between m = 6 and m = 7. Here, we calculate the difference of MCRRPE between m = 6 and m = 7 and obtain Fig. 12a(a); we then obtain Fig. 12a(b) for WMCRRPE. Similarly, the difference of MCRRPE between m = 6 and m = 7 and the difference of WMCRRPE between m = 6 and m = 7 for sample b are shown in Fig. 12b.

Fig. 12a. The difference between m = 6 and m = 7 in the case of sample a ((a) MCRRPE, (b) WMCRRPE).

Fig. 12b. The difference between m = 6 and m = 7 in the case of sample b ((a) MCRRPE, (b) WMCRRPE).

From Fig. 12a we see that increasing the embedding dimension leads to a significant difference for sample a. When q is in the range of 0.9 to 1.1, the difference is more obvious than in other situations. Therefore, we cannot obtain a more stable result by enlarging the embedding dimension, which is in line with the characteristics of cumulative residual Rényi entropy. A similar observation for sample b can be made from Fig. 12b.

Then, we discuss sample a. First, the differences between MCRRPE and WMCRRPE with embedding dimensions m = 6 and m = 7 are compared: Fig. 13a(a) and Fig. 13a(b) present the comparison on m = 6 and m = 7, respectively. It can be seen that when the q value is too large or too small, the difference between MCRRPE and WMCRRPE is close to 0, which means the MCRRPE and WMCRRPE results are more similar. When q is close to 1, the difference between MCRRPE and WMCRRPE is much greater than 0, which reminds us to choose a q value close to 1 in order to better show the strengths of WMCRRPE. Here, we discuss the situation when q is close to 1. It can be seen from Fig. 13a that when the scale factor s is equal to 1, 2, 11, 17, 29 or 30, the difference between MCRRPE and WMCRRPE is more obvious than when the scale factor takes other values. In the previous analysis, we observed from Fig. 8a that WMCRRPE and MCRRPE have a large difference when the scale factor s < 3; besides, the difference decreases as the embedding dimension m increases. Similar results can be seen in Fig. 13a: when the embedding dimension m increases from 6 to 7, we find that the difference between WMCRRPE and MCRRPE becomes smaller.

Now, we analyze sample b. Similarly, the differences between MCRRPE and WMCRRPE on m = 6 and m = 7 are compared, respectively. Fig. 13b presents the results of the comparison.

Fig. 13a. The difference between MCRRPE and WMCRRPE in the case of sample a ((a) m = 6, (b) m = 7).

Fig. 13b. The difference between MCRRPE and WMCRRPE in the case of sample b ((a) m = 6, (b) m = 7).

Here we see a similar situation: when q deviates from 1, the difference between WMCRRPE and MCRRPE for sample b is close to 0, whereas when q is close to 1, the difference between WMCRRPE and MCRRPE for sample b is much greater than 0. Therefore, we study the situation when q is close to 1. With the increase of the embedding dimension, the fluctuation of the entropy curve increases while the difference between WMCRRPE and MCRRPE decreases. In addition, we make a comparison through Figs. 13a-13b. The difference for sample a is more obvious than that for sample b for the same scale factor s, Rényi parameter q and embedding dimension m. When m becomes larger, the difference is weakened. When the embedding dimension m is chosen to be less than 6, we observe the same change rules. Therefore, we suggest that choosing a lower embedding dimension is more appropriate for observing the difference before and after weighting.

Now, we generalize the property of the difference between WMCRRPE and MCRRPE for sample a and sample b. Here we set the scale s = 1 and choose the MCRRPE and WMCRRPE results with embedding dimensions m = 6, 7 as an example. The Rényi parameter q and the time delay τ are no longer restricted. The MCRRPE and WMCRRPE results are calculated for samples a and b, respectively, and we then plot them as a function of the time delay τ and the parameter q. The MCRRPE and WMCRRPE results of samples a and b are shown in Figs. 14a-14b.

Fig. 14a. The difference between MCRRPE and WMCRRPE in the case of sample a ((a) m = 6, (b) m = 7).

Fig. 14b. The difference between MCRRPE and WMCRRPE in the case of sample b ((a) m = 6, (b) m = 7).

From Fig. 14a, we can see that the main difference is distributed around the line q = 1. Therefore, we discuss the situation when q is in the vicinity of 1. When the time delay τ is large, the difference between MCRRPE and WMCRRPE is close to 0, which means the MCRRPE and WMCRRPE results are more similar; when the time delay τ is small, the difference between MCRRPE and WMCRRPE is much greater than 0. However, we observe the opposite results in Fig. 14b for sample b: when the time delay τ is large, the difference between MCRRPE and WMCRRPE is much greater than 0, while when the time delay is small, the difference between MCRRPE and WMCRRPE is close to 0. By making a comparison through Figs. 14a-14b, we find a large difference between samples a and b, which is caused by the added noise. For sample a, the difference between MCRRPE and WMCRRPE is weakened by the increase of the embedding dimension; however, the difference between MCRRPE and WMCRRPE for sample b increases as the embedding dimension increases. In addition, we can choose a value of q near 1 and observe the difference between MCRRPE and WMCRRPE as the time delay changes; in this way, we can detect whether there is noise interference in the data. Similarly, we can also choose a q value away from 1; in this way, we can avoid noise interference in the data.

Above all, we learn that the MCRRPE and WMCRRPE methods include the MPE and WMPE methods as a special case. Compared with the traditional methods, the proposed methods have the advantage of a flexible q. When the q value changes, the entropy value varies greatly, which is easy to observe. On the other hand, the influence of the change of scale on the entropy value is obviously weakened, and noise in the data can also be detected sensitively. This reminds us that when we use the MPE and WMPE methods, or the MRPE and WMRPE methods, to investigate properties of time series, the MCRRPE and WMCRRPE methods are appropriate choices if the results are not ideal, where an optimal q and cumulative residual entropy offer a better observation.

4.2 MCRRPE and WMCRRPE analysis among US, European and Chinese stock markets

In this subsection, we investigate the nine stock markets from the US, Europe and China by applying MCRRPE and WMCRRPE. The analysis of the variable q for the US, European and Chinese stock markets can be seen in Appendix B. From the analysis of the variable q, we find that it is difficult to observe the differences between the nine stock markets. For a better observation, we start from the specific information about the difference between MCRRPE and WMCRRPE for the nine markets and see whether there are similarities and differences. Based on the previous analysis of the variables, we select the scale s = 5 and the time delay τ = 5 as relatively suitable choices. The difference results for the US markets are shown in Figs. 15-17, for the European markets in Figs. 18-20, and for the Chinese markets in Figs. 21-23.

First, we discuss the special case q = 1, in which MCRRPE and WMCRRPE degenerate to MCRPE and WMCRPE, respectively. If we fix the embedding dimension m = 2, MCRPE is equal to WMCRPE. When m increases to 3, a difference appears between the MCRPE and WMCRPE values, and as the embedding dimension m increases further, the difference between MCRPE and WMCRPE increases accordingly. Then we study the other cases of q in Figs. 15-23. When q belongs to the interval [0.2, 0.4], WMCRRPE is almost equal to MCRRPE; in addition, the curves of the difference in the entropy values of MCRRPE and WMCRRPE almost coincide with each other for the various embedding dimensions. When q increases to 0.6, the difference between MCRRPE and WMCRRPE shows a clear upward trend, and the curves with different embedding dimensions separate from each other. The closer q is to 1, the greater the difference between MCRRPE and WMCRRPE is. When q is close to 0.2, the curves of the difference in entropy values for the various embedding dimensions converge to 0, and at this point it is difficult to observe the difference between MCRRPE and WMCRRPE. When q is greater than 1.4, the difference curves tend to be flat and the rate of change is approximately equal to zero; it should be noted that the differences for the various embedding dimensions do not tend to become equal to each other, which differs from the case where q tends to 0.2. When q is greater than 1, the relationship between the curves of the difference between MCRRPE and WMCRRPE is similar to the situation when q is less than 1. Therefore, we only discuss the variation of the entropy values for q in the interval [0, 1].

Fig. 15. The difference of entropy value between MCRRPE and WMCRRPE in the case of S&P500.

Fig. 16. The difference of entropy value between MCRRPE and WMCRRPE in the case of DJI.

Fig. 17. The difference of entropy value between MCRRPE and WMCRRPE in the case of NASDAQ.

Fig. 18. The difference of entropy value between MCRRPE and WMCRRPE in the case of CAC40.

Fig. 19. The difference of entropy value between MCRRPE and WMCRRPE in the case of DAX30.

Fig. 20. The difference of entropy value between MCRRPE and WMCRRPE in the case of FTSE100.

Fig. 21. The difference of entropy value between MCRRPE and WMCRRPE in the case of SSE.

Fig. 22. The difference of entropy value between MCRRPE and WMCRRPE in the case of SZSE.

Fig. 23. The difference of entropy value between MCRRPE and WMCRRPE in the case of HSI.

In Figs. 15-23, we observe that the difference between MCRRPE and WMCRRPE increases with the embedding dimension, except for NASDAQ. In Figs. 15-17, the difference between MCRRPE and WMCRRPE of each US market on m = 2 appears when q increases to 0.6. The difference between MCRRPE and WMCRRPE on m = 3 is gradually higher than that on m = 2 and starts to separate from the difference on m = 2 with the increase of the parameter q. When m increases to 4, the difference between MCRRPE and WMCRRPE increases at a higher pace than on m = 3. The curves of the difference on m = 5, 6, 7 almost coincide with each other, while the difference on m = 4 is close to the differences on m = 5, 6, 7. Then we discuss the special case of NASDAQ separately. When q increases to 0.4, the differences between the different embedding dimensions begin to increase significantly. Except for the difference on m = 2, the difference between MCRRPE and WMCRRPE decreases as the embedding dimension increases. The difference between MCRRPE and WMCRRPE on m = 2 is greater than the difference on m = 5 and less than the difference on m = 4, and the difference curves of MCRRPE and WMCRRPE on m = 2, 4, 5 are close to each other.

In Figs. 18-20, the difference between MCRRPE and WMCRRPE of each European market on m = 2 appears when q increases to 0.6. The difference between MCRRPE and WMCRRPE on m = 3 increases at a higher rate than on the other embedding dimensions of the European markets; therefore, the separation between the differences on m = 2 and m = 3 is easier to observe. In the case of CAC40 and FTSE100, the curves of the difference between MCRRPE and WMCRRPE on m = 4, 5, 6, 7 coincide with each other, while the difference on m = 3 is close to those on m = 4, 5, 6, 7. By contrast, the curves of the difference between WMCRRPE and MCRRPE on m = 3, 4, 5, 6, 7 coincide with each other in the case of DAX30.

As can be seen from Figs. 21-23, the difference between MCRRPE and WMCRRPE in the case of SSE and SZSE differs from the difference in the other cases. When the embedding dimension m = 2, the MCRRPE of SSE and SZSE is greater than the WMCRRPE of SSE and SZSE; when m increases to 3, the WMCRRPE of SSE and SZSE is greater than the MCRRPE, which shows that the weighting increases the entropy value in the case of SSE and SZSE. Furthermore, the difference on m = 4 increases at a much higher pace than that on m = 3, and the difference curves on m = 4, 5, 6, 7 are close to each other. Overall, the curves of the difference between MCRRPE and WMCRRPE of SSE and SZSE on the various embedding dimensions are closer to each other, showing a more progressive change, than those of the US and European markets. The results of HSI are different from the results of the other two Chinese markets: when m increases to 3, the difference between MCRRPE and WMCRRPE increases at a high rate and starts to separate from the difference on m = 2 when q increases to 0.4. In addition, the curve of the difference on m = 3 is very close to that on m = 4, and the curves of the difference on m = 4, 5, 6, 7 almost coincide with each other.

Through the observation of Figs. 15-23, it can be concluded that the differences between MCRRPE and WMCRRPE of S&P500 and DJI are the most similar to each other among the US stock markets, the differences of CAC40 and FTSE100 are the most similar to each other among the European stock markets, and the differences of SSE and SZSE are the most similar to each other among the Chinese stock markets. Although HSI is an index of the Chinese stock market, its difference between MCRRPE and WMCRRPE is similar to that of DJI, which is an index of the US stock market.

4.3 Comparison among US, European and Chinese stock markets

We find that the magnitude of the entropy does not increase or decrease significantly with the change of the scale. However, as the embedding dimension m increases, the fluctuation of the entropy curves increases. In order to analyze the relation of the complexity among the US, European and Chinese stock markets, we choose the embedding dimensions m = 6, 7 to compare the MCRRPE and WMCRRPE results. It should be noted that we set the Rényi parameter q = 0.2 and the time delay τ = 5 for the purpose of enhancing the noise immunity. Fig. 24 shows the comparison of MCRRPE and WMCRRPE among the US, European and Chinese stock markets when m = 6, and the comparison on m = 7 is shown in Fig. 25. In addition, with fixed q = 0.2 and τ = 5, we provide the statistical data (mean value, standard deviation, minimum and maximum) of the MCRRPE and WMCRRPE values of the US, European and Chinese stock markets on embedding dimension m = 6 in Table 1 and on m = 7 in Table 2, respectively.
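The entries of Tables 1 and 2 are summary statistics of the per-scale entropy curves. For a given normalized return series R, they could be reproduced along the following lines (again using the illustrative wcrrpe helper from Section 2.2; the exact scale range is an assumption based on the figures):

```python
import numpy as np

def entropy_stats(R, m=6, q=0.2, tau=5, scales=range(1, 31)):
    """Mean, standard deviation, minimum and maximum of the WMCRRPE curve over the scale factors."""
    curve = np.array([wcrrpe(R, m=m, tau=tau, q=q, s=s) for s in scales])
    return {"mean": curve.mean(), "std": curve.std(), "min": curve.min(), "max": curve.max()}
```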

Fig. 24a. MCRRPE results of US, European and Chinese stock markets on embedding dimension m = 6 when q = 0.2 and τ = 5.

Fig. 24b. WMCRRPE results of US, European and Chinese stock markets on embedding dimension m = 6 when q = 0.2 and τ = 5.

Table 1: Statistics of the MCRRPE and WMCRRPE of the nine stock markets on m = 6, q = 0.2, τ = 5.

                 S&P500   DJI      NASDAQ   FTSE100  DAX30    CAC40    SSE      SZSE     HSI
MCRRPE   mean    8.0061   8.0041   8.0049   8.0031   8.0055   8.0055   8.0135   8.0127   8.0026
         std     0.0145   0.0139   0.0098   0.0135   0.0142   0.0163   0.0136   0.0135   0.0112
         min     7.9722   7.9723   7.9874   7.9776   7.9851   7.9700   7.9924   7.9921   7.9722
         max     8.0428   8.0356   8.0259   8.0279   8.0371   8.0411   8.0434   8.0428   8.0207
WMCRRPE  mean    8.0157   8.0129   8.0233   8.0108   8.0167   8.0133   7.9981   7.9969   8.0029
         std     0.0132   0.0142   0.0144   0.0117   0.0173   0.0177   0.0136   0.0124   0.0131
         min     7.9890   7.9895   8.0005   7.9894   7.9935   7.9683   7.9539   7.9531   7.9699
         max     8.0375   8.0492   8.0519   8.0334   8.0613   8.0570   8.0249   8.0136   8.0287

Fig. 25a. MCRRPE results of US, European and Chinese stock markets on embedding dimension m = 7 when q = 0.2 and τ = 5.

Fig. 25b. WMCRRPE results of US, European and Chinese stock markets on embedding dimension m = 7 when q = 0.2 and τ = 5.

Table 2: Statistics of the MCRRPE and WMCRRPE of the nine stock markets on m = 7, q = 0.2, τ = 5.

                 S&P500    DJI       NASDAQ    FTSE100   DAX30     CAC40     SSE       SZSE      HSI
MCRRPE   mean    10.4418   10.4386   10.4406   10.4363   10.4391   10.4384   10.4442   10.4431   10.4365
         std     0.0133    0.0139    0.0108    0.0135    0.0148    0.0154    0.0131    0.0137    0.0112
         min     10.4144   10.4160   10.4212   10.4154   10.4108   10.4116   10.4256   10.4254   10.4011
         max     10.4711   10.4731   10.4580   10.4645   10.4777   10.4726   10.4785   10.4782   10.4529
WMCRRPE  mean    10.4506   10.4475   10.4587   10.4432   10.4519   10.4472   10.4279   10.4255   10.4348
         std     0.0130    0.0138    0.0153    0.0119    0.0171    0.0165    0.0116    0.0130    0.0144
         min     10.4255   10.4254   10.4355   10.4217   10.4299   10.4135   10.3914   10.3871   10.4010
         max     10.4732   10.4822   10.4933   10.4639   10.4995   10.4881   10.4566   10.4417   10.4601

Some properties can be found from the comparison:

1. For m = 6, the MCRRPE results of the nine stock markets alternate between increase and decrease and fluctuate across each other in the range [7.97, 8.05], where the MCRRPE results of CAC40 fluctuate the most. The WMCRRPE results are different from the MCRRPE results. When the scale is less than 13, the WMCRRPE results of the nine markets fluctuate around 8.01. When the scale is greater than 13, the WMCRRPE results of NASDAQ have an upward trend and fluctuate in the range [8.00, 8.06], while the WMCRRPE results of SSE and SZSE have a downward trend and fluctuate in the range [7.95, 8.03]. The WMCRRPE results of S&P500 and DJI fluctuate in the range [7.98, 8.05], and the WMCRRPE results of DAX30 and CAC40 fluctuate violently in the range [7.96, 8.07]. Besides, the WMCRRPE results of FTSE100 fluctuate in the range [7.98, 8.04], and the WMCRRPE results of HSI fluctuate in the range [7.96, 8.03].


2. For m = 6, the mean values of the MCRRPE results of the nine stock indexes are approximately 8.005. However, the mean values of the WMCRRPE results differ. The mean value of the WMCRRPE results of NASDAQ is the highest, around 8.023, while the mean value of the WMCRRPE results of SZSE is the lowest, around 7.997. Besides, the mean values of the WMCRRPE results of S&P500, DJI, DAX30 and CAC40 are approximately 8.015, the mean value of the WMCRRPE results of FTSE100 is about 8.011, and the mean values of the WMCRRPE results of SSE and HSI are about 8.000.


3. For m = 7, the MCRRPE results of the nine stock markets fluctuate and cross over each other in the range of [10.40, 10.48], and the fluctuations are more volatile than those of the MCRRPE results on m = 6. However, the WMCRRPE results are different. When the scale is less than 13, the WMCRRPE results of the nine markets are concentrated around 10.44. The upward trend of the WMCRRPE results of NASDAQ is more obvious than that on m = 6, fluctuating within the range of [10.43, 10.50]. The downward trend of the WMCRRPE results of SSE and SZSE is more obvious than that on m = 6, fluctuating within the range of [10.38, 10.46]. The WMCRRPE results of S&P500 and DJI fluctuate in the range of [10.42, 10.49], and the WMCRRPE results of DAX30 and CAC40 fluctuate in the range of [10.41, 10.50]. Besides, the WMCRRPE results of FTSE100 fluctuate in the range of [10.42, 10.47], and the WMCRRPE results of HSI fluctuate within [10.40, 10.46].


4. For m = 7, the mean values of the MCRRPE results of the nine stock indexes are approximately 10.440. However, the mean values of the WMCRRPE results differ. The mean value of the WMCRRPE results of NASDAQ is the highest, about 10.459, while the mean value of the WMCRRPE results of SZSE is the lowest, about 10.426. In addition, the mean values of the WMCRRPE results of S&P500, DJI, DAX30 and CAC40 are approximately 10.450, the mean values of the WMCRRPE results of SSE and HSI are approximately 10.430, and the mean value of the WMCRRPE results of FTSE100 is 10.443.

5. For both m = 6 and m = 7, the mean values of the WMCRRPE results of the Chinese mainland markets are lower than those of the US markets, the European markets and HSI, while the mean value of the WMCRRPE results of NASDAQ is higher than those of all the other stock markets.


6. As the scale increases, the WMCRRPE results of SSE and SZSE show a downward trend, which becomes more obvious as the embedding dimension increases. This suggests that the complexity of the Chinese mainland stock markets depends strongly on the time scale: when studying them, historical data carry greater reference value than they do for the US and European stock markets. Conversely, the WMCRRPE results of NASDAQ show an upward trend as the scale increases, and this trend is more pronounced as the embedding dimension increases. It reminds us that the NASDAQ series is non-stationary, and a small scale should be chosen to avoid distortion in the analysis.

These characteristics lead to some conclusions. The MCRRPE method does not distinguish well between stock markets in different areas, and we cannot accurately distinguish them from the figures. The WMCRRPE method can distinguish between the US, European and Chinese stock markets. The nine stock markets can be broadly divided into four categories: the first category is NASDAQ and FTSE100, the second category is S&P500, DJI and HSI, the third category is DAX30 and CAC40, and the fourth category is SSE and SZSE. Through the analysis, it is found that the WMCRRPE curve of NASDAQ has a clear upward trend, whereas the WMCRRPE curve of SSE has a clear downward trend. Due to the particularity of the social structure and political background of Hong Kong, HSI is different from the stock markets in Mainland China and is more similar to the stock markets in the US and Europe. Due to the adjustment of the protection policies in Mainland China, the WMCRRPE results of SSE and SZSE are clearly lower than those of the other seven stock markets. Although Tables 1 and 2 show that the WMCRRPE method increases the standard deviation, the trend of the nine stock markets with scale becomes more obvious; that is, the method distinguishes these financial stock markets very well.

5 Conclusion

In this paper, we extend MRPE and WMRPE to MCRRPE and WMCRRPE, and the added cumulative residual entropy provides a new perspective for the study of time series. It can be used to detect noise in signals and to measure financial risk, and the results are more stable. We first apply the MCRRPE and WMCRRPE methods to the synthetic data. The experiments show that MCRRPE and WMCRRPE are almost unaffected by changes of scale, and that choosing an appropriate q can properly amplify the difference in entropy between MCRRPE and WMCRRPE for various embedding dimensions. In addition, the MCRRPE and WMCRRPE of the signal with added noise are significantly different under large time delays. Then we apply the MCRRPE and WMCRRPE methods to the US, European and Chinese stock markets and compare them. The MCRRPE and WMCRRPE results for different markets show similarities when applied to various embedding dimensions with variable q. By observing the WMCRRPE curves and analyzing the statistics, we can divide the stock markets into four groups: (1) NASDAQ and FTSE100, (2) S&P500, DJI and HSI, (3) DAX30 and CAC40 and (4) SSE and SZSE. Among them, the WMCRRPE curve of NASDAQ has a clear upward trend, and the WMCRRPE curve of SSE has a significant downward trend. The WMCRRPE results of SSE and SZSE are lower than those of the other seven stock markets, which may be due to the adjustment of stock markets in Mainland China by policies and mechanisms. The particularity of HSI makes it more similar to the stock markets in Europe and America. For comparison, we choose MCRRPE and WMCRRPE based on q = 0.2, τ = 5 and m = 6, 7 to highlight the validity of the results. The WMCRRPE method shifts the range of entropy fluctuations of different markets to varying degrees, and better distinguishes the complexity of different stock markets. We find that, among the stock markets other than those of the Chinese mainland, markets with large standard deviations have larger WMCRRPE results, while markets with small standard deviations have smaller WMCRRPE results; that is, the standard deviation has a certain relationship with WMCRRPE. Compared with MCRRPE, WMCRRPE can better measure the investment risk of financial markets. On the other hand, it can distinguish these financial stock markets well, reflecting the advantage of adding weights. The proposed WMCRRPE method can quantify the complexity of signals and measure their uncertainty.

Acknowledgments

Financial support from the Fundamental Research Funds for the Central Universities (2019YJS202) is gratefully acknowledged.


Appendix A Some other cases of analysis on synthetic data


Figs. 26a-27b show the differences in absolute MCRRPE results with q in two intervals for sample a and sample b on m = 2 and m = 7. Figs. 28a-28b show the MCRRPE and WMCRRPE results for sample a and sample b on m = 6 without restricting the values of the scale s and the Rényi entropy parameter q. Fig. 29 shows the difference between sample a and sample b for MCRRPE and WMCRRPE with the scale s = 1 on m = 6, 7.
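
The panel titles in Figs. 26-29 indicate that sample a is a logistic-map series ("Logistic") and sample b is the same series with added noise ("Logistic+noise"). A minimal sketch for generating comparable synthetic data follows; the logistic parameter, initial condition, series length, and the type and amplitude of the added noise are assumptions and may differ from the settings used in the paper.

```python
import numpy as np

def logistic_map(n, r=4.0, x0=0.4, discard=1000):
    """Logistic-map series x_{k+1} = r * x_k * (1 - x_k); r, x0 and the discarded
    transient length are illustrative choices only."""
    x = x0
    out = np.empty(n + discard)
    for k in range(n + discard):
        x = r * x * (1.0 - x)
        out[k] = x
    return out[discard:]

rng = np.random.default_rng(0)
sample_a = logistic_map(20000)                           # "Logistic"
sample_b = sample_a + 0.1 * rng.standard_normal(20000)   # "Logistic+noise" (assumed Gaussian, level 0.1)
```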

Fig. 26a. The difference of absolute MCRRPE results in the case of sample a on m = 2.

Fig. 26b. The difference of absolute MCRRPE results in the case of sample b on m = 2.

Fig. 27a. The difference of absolute MCRRPE results in the case of sample a on m = 7.

Fig. 27b. The difference of absolute MCRRPE results in the case of sample b on m = 7.

Fig. 28a. MCRRPE versus WMCRRPE results in the case of sample a with embedding dimension m = 6.

Fig. 28b. MCRRPE versus WMCRRPE results in the case of sample b with embedding dimension m = 6.

Fig. 29a. The difference between sample a and sample b for MCRRPE results with scale s = 1.

Fig. 29b. The difference between sample a and sample b for WMCRRPE results with scale s = 1.

Appendix B The analysis of variable q among US, European and Chinese stock markets

We choose various embedding dimensions m from 2 to 7. Since the figures of the MCRRPE and WMCRRPE results of the US stock markets are similar across the different scales and time delays, we analyze the MCRRPE and WMCRRPE results of the US markets with various embedding dimensions m when the scale s = 5 and the time delay τ = 5 as their representatives, and likewise for the European and Chinese markets. We provide the MCRRPE and WMCRRPE results of the US markets in Figs. 30-32, the European markets in Figs. 33-35, and the Chinese markets in Figs. 36-38. From Figs. 30-38, we observe results similar to those in Fig. 4a and Fig. 4b. WMCRRPE is almost equal to MCRRPE. The closer q is to 1, the closer the entropy value is to infinity. The absolute entropy value increases with m, where a higher embedding dimension means more uncertainty. In addition, a higher embedding dimension m leads to a wider range of entropy values over the same interval of the parameter q.
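
The divergence of the entropy values as q approaches 1, and the growth of their absolute values with m, are both consistent with the Rényi-type form assumed earlier, H_q = log(sum_j S_j^q)/(1 - q), where S_j is the survival (cumulative residual) function of the ordinal-pattern distribution: since sum_j S_j is generally much larger than 1, the 1/(1 - q) prefactor forces the value towards positive infinity as q approaches 1 from below and towards negative infinity from above. The toy computation below illustrates this; the functional form is our assumption, not a formula quoted from the paper.

```python
import numpy as np

# Survival (cumulative residual) function of a uniform ordinal-pattern
# distribution with m! = 24 patterns (m = 4); note that sum(surv) != 1.
p = np.full(24, 1.0 / 24)
surv = 1.0 - np.cumsum(p)[:-1]

for q in [0.2, 0.6, 0.9, 0.99, 1.01, 1.1, 1.4, 1.8]:
    h = np.log(np.sum(surv ** q)) / (1.0 - q)
    print(f"q = {q:4.2f}   entropy = {h:10.2f}")

# Because log(sum(surv)) > 0, the 1/(1 - q) prefactor drives the value towards
# +infinity as q -> 1 from below and towards -infinity from above, and the
# magnitude grows with the number of patterns, i.e. with the embedding dimension m.
```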

Fig. 30. MCRRPE and WMCRRPE results of S&P500 with various embedding dimension m when s = 5 and τ = 5.

Fig. 31. MCRRPE and WMCRRPE results of DJI with various embedding dimension m when s = 5 and τ = 5.

Fig. 32. MCRRPE and WMCRRPE results of NASDAQ with various embedding dimension m when s = 5 and τ = 5.

Fig. 33. MCRRPE and WMCRRPE results of CAC40 with various embedding dimension m when s = 5 and τ = 5.

Fig. 34. MCRRPE and WMCRRPE results of DAX30 with various embedding dimension m when s = 5 and τ = 5.

Fig. 35. MCRRPE and WMCRRPE results of FTSE100 with various embedding dimension m when s = 5 and τ = 5.

Fig. 36. MCRRPE and WMCRRPE results of SSE with various embedding dimension m when s = 5 and τ = 5.

Fig. 37. MCRRPE and WMCRRPE results of SZSE with various embedding dimension m when s = 5 and τ = 5.

Fig. 38. MCRRPE and WMCRRPE results of HSI with various embedding dimension m when s = 5 and τ = 5.


References

[1] G.U. Yule, On a method of investigating periodicities in disturbed series, with special reference to Wolfer's sunspot numbers, Philos. Trans. R. Soc. London, Ser. A 226 (1927) 267-298.
[2] G.T. Walker, On periodicity in series of related terms, Proc. R. Soc. London, Ser. A 131 (1931) 518-532.
[3] G.E.P. Box, G.M. Jenkins, G.C. Reinsel, G.M. Ljung, Time series analysis: forecasting and control, John Wiley & Sons, 2015.
[4] R.F. Engle, Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50 (1982) 987-1008.
[5] T. Bollerslev, Generalized autoregressive conditional heteroskedasticity, J. Econom. 31 (1986) 307-327.
[6] R. Engle, D.M. Lilien, R.P. Robins, Estimating time varying risk premia in the term structure: the Arch-M model, Econometrica 55 (1987) 391-407.
[7] D.B. Nelson, Conditional heteroskedasticity in asset returns: a new approach, Econometrica 59 (1991) 347-370.
[8] R.F. Engle, C.W.J. Granger, Co-integration and error correction: representation, estimation, and testing, Econometrica 55 (1987) 251-276.
[9] N.S. Balke, T.B. Fomby, Threshold cointegration, Int. Econ. Rev. (Philadelphia) 38 (1997) 627-645.
[10] H. Tong, K.S. Lim, Threshold autoregression, limit cycles and cyclical data - with discussion, J. R. Stat. Soc. Series B Stat. Methodol. 42 (1980) 245-292.
[11] P.A.W. Lewis, J.G. Stevens, Nonlinear modeling of time series using multivariate adaptive regression splines (MARS), J. Am. Stat. Assoc. 86 (1991) 864-877.
[12] B.P. Carlin, N.G. Polson, D.S. Stoffer, A Monte Carlo approach to nonnormal and nonlinear state-space modeling, J. Am. Stat. Assoc. 87 (1992) 493-500.
[13] R. Chen, R.S. Tsay, Nonlinear additive ARX models, J. Am. Stat. Assoc. 88 (1993) 955-967.
[14] J.P. Burg, The relationship between maximum entropy spectra and maximum likelihood spectra, Geophysics 37 (1972) 375-376.
[15] S.M. Pincus, Approximate entropy as a measure of system complexity, Proc. Natl. Acad. Sci. USA 88 (1991) 2297-2301.
[16] J.S. Richman, J.R. Moorman, Physiological time-series analysis using approximate entropy and sample entropy, Am. J. Physiol. Heart Circ. Physiol. 278 (2000) H2039-H2049.
[17] C. Bandt, B. Pompe, Permutation entropy: a natural complexity measure for time series, Phys. Rev. Lett. 88 (2002) 174102.
[18] Y. Cao, W. Tung, J. Gao, V.A. Protopopescu, L.M. Hively, Detecting dynamical changes in time series using the permutation entropy, Phys. Rev. E 70 (2004) 046217.
[19] M. Rao, Y. Chen, B.C. Vemuri, F. Wang, Cumulative residual entropy: a new measure of information, IEEE Trans. Inf. Theory 50 (2004) 1220-1228.
[20] R. Vicente, M. Wibral, M. Lindner, G. Pipa, Transfer entropy - a model-free measure of effective connectivity for the neurosciences, J. Comput. Neurosci. 30 (2011) 45-67.
[21] M. Costa, A.L. Goldberger, C.K. Peng, Multiscale entropy analysis of complex physiologic time series, Phys. Rev. Lett. 89 (2002) 068102.
[22] M. Costa, C.K. Peng, A.L. Goldberger, J.M. Hausdorff, Multiscale entropy analysis of human gait dynamics, Physica A 330 (2003) 53-60.
[23] M. Costa, A.L. Goldberger, C.K. Peng, Multiscale entropy analysis of biological signals, Phys. Rev. E 71 (2005) 021906.
[24] R.A. Thuraisingham, G.A. Gottwald, On multiscale entropy analysis for physiological data, Physica A 366 (2006) 323-332.
[25] J. Escudero, D. Abásolo, R. Hornero, P. Espino, M. López, Analysis of electroencephalograms in Alzheimer's disease patients with multiscale entropy, Physiol. Meas. 27 (2006) 1091.
[26] M.U. Ahmed, D.P. Mandic, Multivariate multiscale entropy: A tool for complexity analysis of multichannel data, Phys. Rev. E 84 (2011) 061918.
[27] T.J. Cornwell, Multiscale CLEAN deconvolution of radio synthesis images, IEEE J. Sel. Top. Signal Process. 2 (2008) 793-801.
[28] B. LeBaron, W.B. Arthur, R. Palmer, Time series properties of an artificial stock market, J. Econ. Dyn. Control 23 (1999) 1487-1516.
[29] K. Huarng, H.K. Yu, A type 2 fuzzy time series model for stock index forecasting, Physica A 353 (2005) 445-462.
[30] Y. Zhang, P. Shang, Permutation entropy analysis of financial time series based on Hill's diversity number, Commun. Nonlinear Sci. Numer. Simul. 53 (2017) 288-298.
[31] M. Xu, P. Shang, J. Huang, Modified generalized sample entropy and surrogate data analysis for stock markets, Commun. Nonlinear Sci. Numer. Simul. 35 (2016) 17-24.
[32] B. Fadlallah, B. Chen, A. Keil, J. Príncipe, Weighted-permutation entropy: A complexity measure for time series incorporating amplitude information, Phys. Rev. E 87 (2013) 022911.
[33] Y. Yin, P. Shang, Weighted multiscale permutation entropy of financial time series, Nonlinear Dynamics 78 (2014) 2921-2939.
[34] S. Chen, P. Shang, Y. Wu, Weighted multiscale Rényi permutation entropy of nonlinear time series, Physica A 496 (2018) 548-570.
[35] A. Rényi, On measures of entropy and information, in: Proc. Fourth Berkeley Symp. on Math. Statist. and Prob., University of California Press, 1961.
[36] M. Asadi, Y. Zohrevand, On the dynamic cumulative residual entropy, J. Stat. Plan. Inference 137 (2007) 1931-1941.