Chaotic time series prediction via artificial neural square fuzzy inference system


Gholamali Heydari a,∗, Ali Akbar Gharaveisi b, MohammadAli Vali a

a Department of Applied Mathematics, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran
b Department of Electrical Engineering, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran

Expert Systems With Applications (2016), http://dx.doi.org/10.1016/j.eswa.2016.02.031

Keywords: Second order TSK; Chaotic time series; Fuzzy systems

Abstract: The present article investigates the application of second order TSK (Takagi–Sugeno–Kang) fuzzy systems to the prediction of chaotic time series. A method is introduced for training second order TSK fuzzy systems using the ANFIS (Artificial Neural Fuzzy Inference System) training method. In a second order TSK system, the nonlinear terms in the rules' consequents prohibit the use of currently available ANFIS codes as is, but the proposed method makes it possible to use ANFIS for a class of simplified second order TSK systems. The main impact of this method on expert and intelligent systems is to provide a new way of modeling and predicting the future behavior of more complex phenomena with a smaller decision rule base. The most significant features of the proposed method are its simplicity and its code reuse property. As a case study, the proposed method is used for the prediction of a chaotic time series. Error comparison shows that the proposed method trains the second order TSK system more effectively. © 2016 Published by Elsevier Ltd.


1. Introduction


TSK fuzzy systems are among the most popular fuzzy systems for modeling nonlinear functions (Takagi & Sugeno, 1985). The consequent of each fuzzy rule in these systems is a constant value or a linear combination of the input variables, giving zero order and first order TSK fuzzy systems respectively. TSK fuzzy systems aim to simplify and speed up defuzzification calculations, which are a bottleneck in real-time control systems (Takács, 2004; Jassbi, Serra, Ribeiro, & Donati, 2006). This simplification leads to the loss of some key properties of native fuzzy systems (namely Mamdani systems) (Takács, 2004; Hamam & Georganas, 2008). Moreover, using traditional TSK systems to model complex problems involving multivariable nonlinear functions leads to fuzzy systems with many rules and membership functions (MFs), which is undesirable (Ren, 2007). High order TSK fuzzy systems have been introduced to overcome this problem and to increase the transparency and interpretability of TSK systems while preserving their computational advantages (Ren, 2007; Buckley, 1992; Demirli & Muthukumaran, 2000; Kasabov, 2002; Cavallo, 2005; Herrera, Pomares, Rojas, Valenzuela, & Prieto, 2005). Prediction of chaotic time series (especially the Mackey–Glass time series as a benchmark (Mackey & Glass, 1977)) is regarded as




Corresponding author. Tel./fax: +98 34 3325 7500. E-mail addresses: [email protected] (G. Heydari), [email protected] (A.A. Gharaveisi), [email protected] (M. Vali).

a popular research field of applied mathematics (e.g., Kasabov, 2002; Kalhor, Araabi, & Lucasi, 2010; Almaraashi & Coupland, 2010; Almaraashi & John, 2011; Sugiarto & Natarajan, 2007; Chen, Yang, Dong, & Abraham, 2004). TSK fuzzy systems are one of the most useful tools for predicting chaotic time series (Kasabov, 2002). The use of high order TSK fuzzy systems in the prediction of chaotic time series has received increasing attention in recent years (Askari & Montazerin, 2015; Gangwar & Kumar, 2012; Chen & Chen, 2011; Chen & Tanuwijaya, 2011; Egrioglu, Aladag, Yolcu, Uslu, & Basaran, 2010; Kalhor et al., 2010; Kasabov, 2002). Some other applications of high order TSK systems in this field can be found in Theochaires (2006), Song, Ma, and Kasabov (2005), and Kim, Kyung, Park, Kim, and Park (2004). High order TSK systems would be much more useful if a good training method were available for them, similar to ANFIS for zero and first order TSK systems (Jang, 1993). In this paper the structure of a second order TSK system is first explained, and a Transformed Second Order TSK Learning method is proposed that tunes the parameters of a second order TSK system with the same ANFIS codes already used to train zero and first order TSK systems.


2. Second order TSK system modification


2.1. The approximation property of high order TSK systems


According to Wang (1997, Section 9.2), all fuzzy systems defined over a compact n-dimensional space D that:

• use Gaussian input membership functions,






• employ the product for the AND method (t-norm) and for aggregation,
• use the center average defuzzification method,

fulfill the Stone–Weierstrass conditions for any continuous function g(x) defined on D. Namely, if S represents the set of all fuzzy systems defined on D with the above specifications, then for any g ∈ C(D) and any real number ε > 0 there exists a fuzzy system F ∈ S that approximates g with an absolute error less than ε. If F(x) denotes the fuzzy system output for an input vector x ∈ D, then:

∀ε > 0, ∀g ∈ C(D), ∃ F ∈ S s.t. |F(x) − g(x)| < ε, ∀x ∈ D    (1)

So such fuzzy systems can perform nonlinear function approximation with arbitrary precision. Jang (1993) also claims that a TSK fuzzy system with bell-shaped input membership functions can perform a similar approximation. From the viewpoint of Jang (1993), a high order TSK system is a type-3 fuzzy reasoning system which also uses higher degrees of the input monomials in its layer 4 (as illustrated in Fig. 1 for a second order TSK system) (Heydari, Gharaveisi, & Vali, 2015). Considering the same proof as given by Jang (1993), and the fact that the "Simplified ANFIS" of Jang (1993) is also a proper subset of high order TSK systems, it can be concluded that high order TSK systems are universal approximators as well (Heydari et al., 2015).

Fig. 1. Second order TSK system as the Type-3 fuzzy reasoning system.

As a managerial insight, prediction is a key factor in planning, and high order TSK systems are powerful tools for performing prediction. For example, precise prediction of electricity (Eslahi Tatafi, Heydari, & Gharaveisi, 2014), oil, or gold (Rajaee, 2013) prices is valuable for pricing a factory product or for adjusting future budgets.

2.2. General second order TSK system

Fig. 2. Structure of a fuzzy system.


Considering the general structure of a fuzzy system shown in Fig. 2, the rule base is the block that differs most in a second order TSK system. Supposing a two-input TSK system, the consequent of the lth rule will be:


y^l = a^l_{00} + a^l_{10} x_1 + a^l_{20} x_2 + a^l_{11} x_1^2 + a^l_{22} x_2^2 + a^l_{12} x_1 x_2    (2)


The output is the weighted average of the rules' outputs:

y = (Σ_{l=1}^{r} w^l y^l) / (Σ_{l=1}^{r} w^l)    (3)

where x_1 and x_2 are the input variables, r is the total number of rules, and w^l is the firing strength of each rule, determined by the input membership functions and the fuzzification block. As mentioned by Jang (1993), the training method is a key factor in the approximation quality of TSK systems; a disappointing fact about high order TSK systems, however, is that no straightforward learning method comparable to ANFIS, offering both suitable speed and precision, can be found in the literature (Herrera et al., 2005). For example, Kasabov (2002) replaces the consequent function by a small neural network to achieve a high order TSK system, Kalhor et al. (2010) use deformed linear models, which involves sophisticated optimization to find the optimum deformed linear models, and a sequential learning method has been developed by Heydari et al. (2015). A suitable training process must tune the input membership functions and determine the values of the rules' coefficients (e.g., a^1_{00}, a^1_{11}, …) such that the overall fuzzy system performs desirably. From Jang (1993) and the software available for tuning TSK system parameters (such as the MATLAB fuzzy logic toolbox), it is seen that current codes do not support polynomials of degree higher than one as the consequent part of a rule in a TSK fuzzy inference system, while high order TSK systems do include such polynomials. As a contribution, we introduce a transformation method for a class of second order TSK systems by which it is possible to use the traditional ANFIS to tune the parameters of these second order TSK systems.
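To make (2) and (3) concrete, the following minimal sketch (not the authors' code; all membership-function parameters and rule coefficients are placeholder values) evaluates a two-input second order TSK system with Gaussian input MFs and the product t-norm:

import numpy as np

def gauss(x, c, s):
    # Gaussian membership function with center c and width s
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def second_order_tsk(x1, x2, rules):
    # Each rule: ((c1, s1), (c2, s2), (a00, a10, a20, a11, a22, a12)).
    num, den = 0.0, 0.0
    for (c1, s1), (c2, s2), (a00, a10, a20, a11, a22, a12) in rules:
        w = gauss(x1, c1, s1) * gauss(x2, c2, s2)  # firing strength, product t-norm
        y = a00 + a10 * x1 + a20 * x2 + a11 * x1 ** 2 + a22 * x2 ** 2 + a12 * x1 * x2  # eq. (2)
        num += w * y
        den += w
    return num / den  # weighted average of rule outputs, eq. (3)

# Two illustrative rules with placeholder parameters.
rules = [((-1.0, 1.0), (-1.0, 1.0), (0.1, 0.5, -0.2, 0.05, 0.0, 0.01)),
         ((1.0, 1.0), (1.0, 1.0), (-0.1, 0.4, 0.3, 0.02, 0.1, 0.0))]
print(second_order_tsk(0.5, -0.3, rules))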


2.3. Square TSK system (S-TSK) (Heydari et al., 2015)


Considering a simplified second order TSK system obtained by omitting terms that involve more than one variable (like x_1 x_2 in (2)), the typical form of the consequent of a rule, say in a two-input system, will be:


y^l = a^l_{00} + a^l_{10} x_1 + a^l_{20} x_2 + a^l_{11} x_1^2 + a^l_{22} x_2^2    (4)

Now one might consider:

z_1 = x_1^2,  z_2 = x_2^2    (5)

and we get a new first order TSK system with four variables x_1, x_2, z_1 and z_2, whose lth rule consequent will be:

y^l = a^l_{00} + a^l_{10} x_1 + a^l_{20} x_2 + a^l_{11} z_1 + a^l_{22} z_2    (6)

where the parameters a^l_{00}, a^l_{10}, a^l_{20}, a^l_{11} and a^l_{22} can be found by ANFIS. This is the idea behind defining such a simplified second order TSK system, called in this paper a Square TSK (or S-TSK)


system. Generally a system with n input variables will have a typical consequent as follows:

y^l = a^l_{00} + a^l_{10} x_1 + … + a^l_{n0} x_n + a^l_{11} z_1 + … + a^l_{nn} z_n    (7)

where z_i = x_i^2 for i = 1, 2, …, n. Let the input sets be A_{k_i,i} and B_{k_{n+i},n+i}, defined by the membership functions μ_{k_i,i}(x_i) and μ_{k_{n+i},n+i}(z_i) respectively, where i is as defined above and k_j = 1, 2, …, p, with p the number of input membership functions for each input variable and j = 1, 2, …, 2n. So the lth rule of such a system looks as follows:

IF x_1 ∈ A_{k_1,1} AND x_2 ∈ A_{k_2,2} AND … AND x_n ∈ A_{k_n,n} AND z_1 ∈ B_{k_{n+1},1} AND z_2 ∈ B_{k_{n+2},2} AND … AND z_n ∈ B_{k_{2n},n} THEN y^l = a^l_{00} + a^l_{10} x_1 + … + a^l_{n0} x_n + a^l_{11} z_1 + … + a^l_{nn} z_n.    (8)

It must be noted that for a specific index i we have x_i^2 = z_i, so if x_i ∈ A_{k_i,i} then B_{k_{n+i},i}, the set that contains z_i, cannot be chosen arbitrarily, as discussed later.

3. Transformed second order TSK learning method


3.1. Main definition


As mentioned previously, the main advantage of the S-TSK is that it can be trained by ANFIS (Jang, 1993) in the same way as a first order TSK system. Suppose X is an m × n matrix containing the input values of a data set of m points from the n-input system to be modeled, and Y is an m × 1 vector containing the corresponding output values. Consider X and Y as follows:


X = [ x_{11} … x_{1n} ; … ; x_{m1} … x_{mn} ],   Y = [ y_1 ; … ; y_m ]    (9)

Now define X∗ as follows:

X∗ = [ x_{11} … x_{1n}  x_{11}^2 … x_{1n}^2 ; … ; x_{m1} … x_{mn}  x_{m1}^2 … x_{mn}^2 ] = [ x_{11} … x_{1n}  z_{11} … z_{1n} ; … ; x_{m1} … x_{mn}  z_{m1} … z_{mn} ]    (10)
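As a minimal sketch of the transformation in (10) (assuming numpy arrays; the subsequent ANFIS training step is only indicated by a comment, since the paper performs it with the standard ANFIS code, e.g., the MATLAB fuzzy logic toolbox):

import numpy as np

def to_square_inputs(X):
    # Append the element-wise squares of the columns of X, turning the
    # m-by-n matrix of eq. (9) into the m-by-2n matrix X* of eq. (10).
    X = np.asarray(X, dtype=float)
    return np.hstack([X, X ** 2])

# Example with m = 3 data points of an n = 2 input system.
X = np.array([[0.1, 0.4],
              [0.2, 0.5],
              [0.3, 0.6]])
X_star = to_square_inputs(X)  # columns: x1, x2, z1 = x1^2, z2 = x2^2
# X_star and the target vector Y would then be passed to any standard
# first order ANFIS trainer to obtain the S-TSK parameters.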

Feeding X∗ as the input and Y as the desired output to ANFIS for training results in a first order TSK system with 2n inputs and one output. It is now possible to use the trained system; moreover, this system can be represented as an n-input system by replacing z_i with x_i^2 and merging the corresponding sets of x_i and x_i^2 by a t-norm. The example in Section 3.3 illustrates this procedure. It is worth noting that the method is applicable to TSK systems of order higher than 2, but for the sake of simplicity this paper is restricted to developing a learning method for second order TSK systems only.

3.2. Theoretical comparison

It is worth mentioning that, among the well-known high order TSK training methods, the proposed method has the valuable advantage of code reuse. For example, the methods of Kalhor et al. (2010) and Kasabov (2002) use entirely new code to perform training of high order TSK systems, whereas the current method and the sequential method of Heydari et al. (2015) use the same code available for ANFIS in a special manner to train high order TSK systems, so code development is much easier and more efficient for them. This is also clearly seen in the results given in this paper and in Heydari et al. (2015).

As a main disadvantage of the proposed method, we must note that the methods introduced by Kasabov (2002) and Kalhor et al. (2010) directly produce a high order TSK system, whereas for the proposed method this is not always the case, due to the dependency between the input variables of the transformed S-TSK system. For an input x_i there exists a dependent variable z_i = x_i^2, and it can be supposed that in a specific rule we have


IF x_1 ∈ A_{k_1,1} AND … AND x_i ∈ A_{k_i,i} AND … AND x_n ∈ A_{k_n,n} AND z_1 ∈ B_{k_{n+1},1} AND … AND z_i ∈ B_{k_{n+i},i} AND … AND z_n ∈ B_{k_{2n},n} THEN y^l = a^l_{00} + a^l_{10} x_1 + … + a^l_{n0} x_n + a^l_{11} z_1 + … + a^l_{nn} z_n    (11)

So A_{k_i,i} and B_{k_{n+i},n+i} must have a large enough intersection, otherwise the rule will never be fired. This condition is not guaranteed by ANFIS while training the transformed S-TSK system, because ANFIS is unaware of the dependency between x_i and z_i. It is therefore expected that the trained systems contain some useless rules, which can be eliminated from the rule base. Moreover, eliminating terms that contain more than one variable from the consequent polynomial weakens the modeling power of the fuzzy system when there is an interaction between input variables.
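As a hedged illustration of this pruning idea (it is not an algorithm given in the paper), one can scan the admissible input range, compute the largest firing strength a rule can ever attain on the dependent pair (x_i, z_i = x_i^2), and drop rules whose strength never exceeds a small threshold:

import numpy as np

def gauss(x, c, s):
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def max_pair_firing(a_params, b_params, x_range, n_pts=1000):
    # Largest value of mu_A(x) * mu_B(x^2) over the admissible x values, where
    # a_params and b_params are (center, width) pairs of the Gaussian sets
    # A_{ki,i} (over x_i) and B (over z_i = x_i^2).
    x = np.linspace(x_range[0], x_range[1], n_pts)
    return float(np.max(gauss(x, a_params[0], a_params[1]) * gauss(x ** 2, b_params[0], b_params[1])))

# Hypothetical sets: A is centered at x = 0 while B is centered near z = 9;
# x values that make mu_A large give z = x^2 far from 9, so the product stays tiny.
strength = max_pair_firing((0.0, 0.3), (9.0, 0.5), x_range=(-4.0, 4.0))
if strength < 1e-3:
    print("rule is effectively never fired and can be removed")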


3.3. An example


Suppose we estimate y = sin(x_1), x_1 ∈ [−π, π], from 12 equally spaced points of real data. Following the proposed method with 2 Gaussian MFs for each input (p = 2), after 100 epochs the ANFIS algorithm returns a two-input first order TSK system (Fig. 3) with the following rules and MFs (Fig. 4):

Fig. 3. Trained TSK system.

Fig. 4. Input MFs.

If x_1 ∈ A_{1,1} AND z_1 ∈ B_{1,1} THEN y^1 = 0.8194 x_1 + 0.3218 z_1 − 0.4205
If x_1 ∈ A_{1,1} AND z_1 ∈ B_{2,1} THEN y^2 = 0.6466 x_1 + 0.2601 z_1 − 0.3037
If x_1 ∈ A_{2,1} AND z_1 ∈ B_{1,1} THEN y^3 = 0.8194 x_1 − 0.3218 z_1 + 0.4205
If x_1 ∈ A_{2,1} AND z_1 ∈ B_{2,1} THEN y^4 = 0.6465 x_1 − 0.2601 z_1 + 0.3037

μ_{1,1}(x_1) = exp(−(x_1 + 3.1710)^2 / (2 × 2.5981^2)),  μ_{2,1}(x_1) = exp(−(x_1 − 3.1710)^2 / (2 × 2.5981^2))
μ_{1,2}(z_1) = exp(−(z_1 − 0.1904)^2 / (2 × 4.3176^2)),  μ_{2,2}(z_1) = exp(−(z_1 − 9.8450)^2 / (2 × 4.3023^2))    (12)

Replacing z_1 by x_1^2 in (12) and plotting all MFs on a single plane gives Fig. 5. In Fig. 5, μ_{1,2} and μ_{2,2} show the membership of x_1 in B_{1,1} and B_{2,1}, so the rules can be rewritten:

If x_1 ∈ A_{1,1} AND x_1 ∈ B_{1,1} THEN y^1 = 0.8194 x_1 + 0.3218 x_1^2 − 0.4205
If x_1 ∈ A_{1,1} AND x_1 ∈ B_{2,1} THEN y^2 = 0.6466 x_1 + 0.2601 x_1^2 − 0.3037
If x_1 ∈ A_{2,1} AND x_1 ∈ B_{1,1} THEN y^3 = 0.8194 x_1 − 0.3218 x_1^2 + 0.4205
If x_1 ∈ A_{2,1} AND x_1 ∈ B_{2,1} THEN y^4 = 0.6465 x_1 − 0.2601 x_1^2 + 0.3037

Fig. 5. Input MFs after replacing z_1 by x_1^2.

Using the product as the AND method, the above rules' antecedents reduce to:

If x_1 ∈ C_1 THEN y^1 = 0.8194 x_1 + 0.3218 x_1^2 − 0.4205
If x_1 ∈ C_2 THEN y^2 = 0.6466 x_1 + 0.2601 x_1^2 − 0.3037
If x_1 ∈ C_3 THEN y^3 = 0.8194 x_1 − 0.3218 x_1^2 + 0.4205
If x_1 ∈ C_4 THEN y^4 = 0.6465 x_1 − 0.2601 x_1^2 + 0.3037    (13)

where the membership functions defining C_1 to C_4 are as follows (illustrated in Fig. 6):

μ_{C_1}(x_1) = μ_{1,1}(x_1) μ_{1,2}(x_1),  μ_{C_2}(x_1) = μ_{1,1}(x_1) μ_{2,2}(x_1)
μ_{C_3}(x_1) = μ_{2,1}(x_1) μ_{1,2}(x_1),  μ_{C_4}(x_1) = μ_{2,1}(x_1) μ_{2,2}(x_1)    (14)

Fig. 6. MFs of the S-TSK system.

It is clear that the rules in (13) and the input sets C_1 to C_4 defined by μ_{C_1} to μ_{C_4} (Fig. 6) completely define a second order TSK system (actually an S-TSK system) that was originally created by ANFIS. The above example is deliberately simple; in the general case, as noted previously, some MFs obtained by the AND operation between the MFs defined as A_{k_i,i} and B_{k_{n+i},i} (namely the C_i MFs in the previous example) have a negligible value, and such MFs and the rules that include them are eliminated from the rule base (as discussed in Section 2.3).

4. Predicting the Mackey–Glass chaotic time series

As mentioned previously, one of the most famous benchmark problems for qualifying a predictor system is the Mackey–Glass chaotic time series. The general formulation of this series is given by the following differential equation:

dx(t)/dt = a x(t − τ) / (1 + x^n(t − τ)) − b x(t)    (15)

This differential equation shows chaotic behavior for a = 0.2, b = 0.1, n = 10, τ = 17 and the initial value x(0) = 1.2. Fig. 7 shows the value of x(t) corresponding to these parameter values for t ∈ [0, 500]. In all studied methods the fuzzy system is supposed to take the values x(t−18), x(t−12), x(t−6) and x(t) as inputs and to predict x(t+85) as the output, so the fuzzy system must have four inputs and one output. Each input has two Gaussian membership functions similar to Fig. 8.

Fig. 7. The Mackey–Glass chaotic time series.

Fig. 8. Sample input membership functions.
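For reference, a minimal sketch that generates the series of (15) with the parameter values above; the paper does not state its integration routine, so a simple Euler scheme with a constant history x(t) = x(0) for t < 0 is assumed here:

import numpy as np

def mackey_glass(T=5600.0, dt=0.1, a=0.2, b=0.1, n=10, tau=17.0, x0=1.2):
    # Euler integration of dx/dt = a*x(t - tau)/(1 + x(t - tau)^n) - b*x(t).
    steps = int(T / dt)
    lag = int(tau / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        x_tau = x[k - lag] if k >= lag else x0  # assumed constant history before t = 0
        x[k + 1] = x[k] + dt * (a * x_tau / (1.0 + x_tau ** n) - b * x[k])
    return x[::int(round(1.0 / dt))]  # values at integer times t = 0, 1, 2, ...

series = mackey_glass()  # long enough to reach the t = 5500 + 85 samples used later
print(series[:5])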


While each input has two membership functions, there are 16 rules for the 4 inputs, and each rule's output is defined as a polynomial whose coefficients are determined in the training phase. Here, 3000 evenly sampled data points, for t = 201 to 3200, are chosen as the training data, and 500 data points, for t = 5001 to 5500, are used as the validation data. Several cases of the proposed algorithm are tested on these data sets of the Mackey–Glass time series. In each case one or more input variables are squared and regarded as new input variables. For a fair comparison, the results are compared with another famous algorithm, DENFIS (Kasabov, 2002), in three different configurations (on-line, first order off-line, high order off-line), and with the sequential learning method developed by Heydari et al. (2015). The comparison is made according to the root mean square error (RMSE) and the non-dimensional error index (NDEI). If there are n points to check and y_i and ŷ_i denote the real and estimated values respectively (for i = 1, 2, …, n), then:

RMSE = sqrt( Σ_{i=1}^{n} (y_i − ŷ_i)^2 / n )    (16)

NDEI = RMSE / (standard deviation of [y_1 y_2 … y_n])    (17)

To study how much the added precision of the proposed method increases the calculation time relative to the traditional ANFIS algorithm, a criterion called the relative effectiveness percentage (RE%) is defined as follows:

RE% = (RMSE_ANFIS × t_ANFIS) / (RMSE_Method × t_Method) × 100    (18)

where RMSE_Method and t_Method are the validation RMSE and the training duration of the method being compared with ANFIS, and RMSE_ANFIS and t_ANFIS are the same quantities for ANFIS. A larger RE value shows that the compared method can increase the precision within a relatively shorter training duration.
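A minimal sketch, assuming the series produced by the previous snippet, of how the lagged inputs and the 85-step-ahead target are arranged and how the criteria (16)-(18) can be computed:

import numpy as np

def lagged_dataset(x, t_start, t_end):
    # Rows: [x(t-18), x(t-12), x(t-6), x(t)] with target x(t+85), for t in [t_start, t_end].
    ts = np.arange(t_start, t_end + 1)
    X = np.column_stack([x[ts - 18], x[ts - 12], x[ts - 6], x[ts]])
    y = x[ts + 85]
    return X, y

def rmse(y, y_hat):  # eq. (16)
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

def ndei(y, y_hat):  # eq. (17)
    return rmse(y, y_hat) / float(np.std(y))

def re_percent(rmse_anfis, t_anfis, rmse_method, t_method):  # eq. (18)
    return 100.0 * (rmse_anfis * t_anfis) / (rmse_method * t_method)

# Training data: t = 201..3200, validation data: t = 5001..5500 (as in the paper).
# x = mackey_glass()                      # from the previous sketch
# X_train, y_train = lagged_dataset(x, 201, 3200)
# X_valid, y_valid = lagged_dataset(x, 5001, 5500)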

5. Simulation results and comparison

As mentioned in the previous section, the proposed method is run in various cases, which consist of choosing one or more input variables to be squared and fed as separate inputs to the fuzzy system. Here the results of each case are given, including the RMSE and NDEI values for training and validation.

Case 1: including x^2(t) as a fuzzy system input
Number of fuzzy system inputs = 5, number of input MFs = 10, number of rules = 32, total number of fuzzy system parameters to be calculated = 10 × 2 + 32 × 6 = 212. A typical rule of this fuzzy system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t)

where k_i ∈ {1, 2} hereafter.
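A hedged reading of the parameter count above, assuming two premise parameters (center and width) per Gaussian MF and one consequent coefficient per input plus a constant per rule:

N_params = 10 × 2 (MF centers and widths) + 32 × (5 + 1) (consequent coefficients) = 20 + 192 = 212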

In training, RMSE = 0.0201 and NDEI = 0.0888; in validation, RMSE = 0.0208 and NDEI = 0.0923.

Case 2: including x^2(t−6) as a fuzzy system input
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 1. A typical rule of this system is:


IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t−6) ∈ B_{k5,2} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t−6)


In training, RMSE = 0.0208 and NDEI = 0.0921; in validation, RMSE = 0.0219 and NDEI = 0.0971.

Case 3: including x^2(t−12) as a fuzzy system input
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 1. A typical rule of this system is:


IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t−12) ∈ B_{k5,3} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t−12)

In training, RMSE = 0.0197 and NDEI = 0.0868; in validation, RMSE = 0.0210 and NDEI = 0.0931.

Case 4: including x^2(t−18) as a fuzzy system input
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 1. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t−18) ∈ B_{k5,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t−18)


In training, RMSE = 0.0208 and NDEI = 0.0918; in validation, RMSE = 0.0215 and NDEI = 0.0956.

Case 5: including x^2(t) and x^2(t−6) as fuzzy system inputs
Number of fuzzy system inputs = 6, number of input MFs = 12, number of rules = 64, total number of fuzzy system parameters to be calculated = 12 × 2 + 64 × 7 = 472. A typical rule of this fuzzy system is:


IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} & x^2(t−6) ∈ B_{k6,2} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t) + b_{l,2} x^2(t−6)


In training, RMSE = 0.0129 and NDEI = 0.0572; in validation, RMSE = 0.0135 and NDEI = 0.0601.

Case 6: including x^2(t) and x^2(t−12) as fuzzy system inputs


The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 5. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} & x^2(t−12) ∈ B_{k6,3} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t) + b_{l,2} x^2(t−12)

In training, RMSE = 0.0131 and NDEI = 0.0578; in validation, RMSE = 0.0146 and NDEI = 0.0648.

Case 7: including x^2(t) and x^2(t−18) as fuzzy system inputs
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 5. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} & x^2(t−18) ∈ B_{k6,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t) + b_{l,2} x^2(t−18)

In training, RMSE = 0.0135 and NDEI = 0.0598; in validation, RMSE = 0.0145 and NDEI = 0.0643.

Case 8: including x^2(t−6) and x^2(t−12) as fuzzy system inputs
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 5. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t−6) ∈ B_{k5,2} & x^2(t−12) ∈ B_{k6,3} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t−6) + b_{l,2} x^2(t−12)

In training, RMSE = 0.0147 and NDEI = 0.0651; in validation, RMSE = 0.0154 and NDEI = 0.0685.

Case 9: including x^2(t−6) and x^2(t−18) as fuzzy system inputs
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 5. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t−6) ∈ B_{k5,2} & x^2(t−18) ∈ B_{k6,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t−6) + b_{l,2} x^2(t−18)

In training, RMSE = 0.0138 and NDEI = 0.0611; in validation, RMSE = 0.0151 and NDEI = 0.0672.

Case 10: including x^2(t−12) and x^2(t−18) as fuzzy system inputs
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 5. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t−12) ∈ B_{k5,3} & x^2(t−18) ∈ B_{k6,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t−12) + b_{l,2} x^2(t−18)

In training, RMSE = 0.0138 and NDEI = 0.0611; in validation, RMSE = 0.0151 and NDEI = 0.0672 (very near the previous case).

Case 11: including x^2(t), x^2(t−6) and x^2(t−12) as fuzzy system inputs
Number of fuzzy system inputs = 7, number of input MFs = 14, number of rules = 128, total number of fuzzy system parameters to be calculated = 14 × 2 + 128 × 8 = 1052. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} & x^2(t−6) ∈ B_{k6,2} & x^2(t−12) ∈ B_{k7,3} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t) + b_{l,2} x^2(t−6) + b_{l,3} x^2(t−12)

In training, RMSE = 0.0092 and NDEI = 0.0408; in validation, RMSE = 0.0110 and NDEI = 0.0491.

Case 12: including x^2(t), x^2(t−6) and x^2(t−18) as fuzzy system inputs
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 11. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} & x^2(t−6) ∈ B_{k6,2} & x^2(t−18) ∈ B_{k7,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t) + b_{l,2} x^2(t−6) + b_{l,3} x^2(t−18)

In training, RMSE = 0.0103 and NDEI = 0.0456; in validation, RMSE = 0.0115 and NDEI = 0.0510.

Case 13: including x^2(t), x^2(t−12) and x^2(t−18) as fuzzy system inputs
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 11. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} & x^2(t−12) ∈ B_{k6,3} & x^2(t−18) ∈ B_{k7,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t) + b_{l,2} x^2(t−12) + b_{l,3} x^2(t−18)

In training, RMSE = 0.0109 and NDEI = 0.0483; in validation, RMSE = 0.0124 and NDEI = 0.0552.

Case 14: including x^2(t−6), x^2(t−12) and x^2(t−18) as fuzzy system inputs
The number of fuzzy system inputs, number of input MFs, number of rules and total number of fuzzy system parameters to be calculated are the same as in Case 11. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t−6) ∈ B_{k5,2} & x^2(t−12) ∈ B_{k6,3} & x^2(t−18) ∈ B_{k7,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t−6) + b_{l,2} x^2(t−12) + b_{l,3} x^2(t−18)

In training, RMSE = 0.0112 and NDEI = 0.0497; in validation, RMSE = 0.0129 and NDEI = 0.0572.

Case 15: including x^2(t), x^2(t−6), x^2(t−12) and x^2(t−18) as fuzzy system inputs
Number of fuzzy system inputs = 8, number of input MFs = 16, number of rules = 256, total number of fuzzy system parameters to be calculated = 16 × 2 + 256 × 9 = 2336. A typical rule of this system is:

IF x(t) ∈ A_{k1,1} & x(t−6) ∈ A_{k2,2} & x(t−12) ∈ A_{k3,3} & x(t−18) ∈ A_{k4,4} & x^2(t) ∈ B_{k5,1} & x^2(t−6) ∈ B_{k6,2} & x^2(t−12) ∈ B_{k7,3} & x^2(t−18) ∈ B_{k8,4} THEN y^l = a_{l,0} + a_{l,1} x(t) + a_{l,2} x(t−6) + a_{l,3} x(t−12) + a_{l,4} x(t−18) + b_{l,1} x^2(t) + b_{l,2} x^2(t−6) + b_{l,3} x^2(t−12) + b_{l,4} x^2(t−18)


In training, RMSE = 0.0080 and NDEI = 0.0353; in validation, RMSE = 0.0098 and NDEI = 0.0436.

Among the 5-input systems (Cases 1 to 4), Case 1 has the best validation result; among the 6-input systems (Cases 5 to 10), Case 7 is the best; among the 7-input systems (Cases 11 to 14), Case 11 is the best; and finally Case 15, with 8 inputs, is the best of all. Fig. 9 illustrates the input MFs of the trained fuzzy system for Case 15.

Fig. 9. Input MFs of the trained fuzzy system in Case 15.

For a fair comparison, the problem of predicting the Mackey–Glass time series with the same properties is solved by the DENFIS algorithm in three modes: on-line training, off-line training using a first order TSK system, and off-line training using a high order TSK system. The latter uses a multilayer perceptron (MLP) to realize high order TSK systems. The three modes are named "DENFIS I", "DENFIS II" and "DENFIS II with MLP", respectively. The problem is also solved by the sequential algorithm introduced by Heydari et al. (2015), and its result is available too. Table 1 gives the overall comparison of the various cases of the proposed method, DENFIS, the sequential method and ANFIS. The last column, "RT", shows the training duration relative to the training duration of ANFIS.

Table 1. Overall comparison. (Cases 1 to 15 use the proposed Transformed Second Order TSK Learning method.)

Method               No. of rules  Epochs  Training RMSE  Training NDEI  Validation RMSE  Validation NDEI  RE%    RT
ANFIS                16            50      0.0302         0.1334         0.0325           0.1441           100    1
DENFIS I             794           2       0.0439         0.1939         0.0295           0.1310           15     7.3
DENFIS II            794           2       0.0780         0.3444         0.0809           0.3591           12.6   0.96
DENFIS II with MLP   262           100     0.0135         0.0595         0.0149           0.0661           0.14   46839
Sequential Method    16            50      0.0165         0.0745         0.0186           0.0810           11     16.8
Case 1               32            50      0.0201         0.0888         0.0208           0.0923           30.9   5.05
Case 2               32            50      0.0208         0.0920         0.0219           0.0971           29.2   5.07
Case 3               32            50      0.0197         0.0868         0.0210           0.0931           30.2   5.12
Case 4               32            50      0.0208         0.0918         0.0215           0.0956           32.3   4.67
Case 5               64            50      0.0129         0.0572         0.0135           0.0601           8.30   28.9
Case 6               64            50      0.0131         0.0578         0.0146           0.0649           8.05   27.6
Case 7               64            50      0.0135         0.0597         0.0145           0.0643           8.23   27.3
Case 8               64            50      0.0147         0.0651         0.0154           0.0685           7.44   28.3
Case 9               64            50      0.0138         0.0611         0.0151           0.0672           7.09   30.2
Case 10              64            50      0.0151         0.0668         0.0166           0.0736           7.17   27.3
Case 11              128           50      0.0092         0.0408         0.0110           0.0491           1.17   251
Case 12              128           50      0.0103         0.0456         0.0115           0.0510           1.1    257
Case 13              128           50      0.0109         0.0483         0.0124           0.0552           1.02   256
Case 14              128           50      0.0113         0.0497         0.0129           0.0572           0.99   254
Case 15              256           50      0.0080         0.0353         0.0098           0.0436           0.23   1448


6. Discussion


Several entries in Table 1 deserve comment. Cases 1, 7 and 11, as discussed previously, give the best results for a fixed number of rules, and Case 15 is the most precise predictor of all. Case 4 has the best RE%, which means it increases the precision with less training time than the other training types. DENFIS II has the shortest training time but also the worst validation NDEI. DENFIS II with MLP has a fairly good validation NDEI, but it needs a training time far longer than Case 15, which gives both of them a very small RE%. The sequential method performs somewhat better than Cases 1 to 4, but in all other cases the systems trained by the proposed method have smaller RMSE and NDEI values.


7. Conclusion and recommendation


The main practical contribution of the present article is the development of a training method, based on the well-known ANFIS algorithm, that can handle a class of second order TSK systems (called Square TSK systems in this paper). This method provides a new way of modeling and predicting the future behavior of more complex phenomena with a smaller decision rule base while preserving simplicity and the code reuse property. The proposed Transformed Second Order TSK Learning method was applied to a data set acquired from the Mackey–Glass chaotic time series. The performance of the trained system was measured by the RMSE and NDEI and compared with the DENFIS, sequential, and traditional ANFIS algorithms. The results show that the proposed method trains the second order system so that it gives a noticeably better performance than the TSK systems resulting from the other methods. Moreover, the results show that the proposed method is more efficient than the DENFIS training method, because DENFIS takes too long to give its best result. As a limitation, there might be some useless rules in the trained systems, which can be eliminated from the rule base; also, the lack of terms containing more than one variable in the consequent polynomials weakens the modeling power of the fuzzy system when there is an interaction between input variables. For future work it is suggested to study methods for training general second order TSK systems and to develop a general method for higher order TSK systems; extending the traditional ANFIS method to high order TSK systems is also valuable. Other recommended directions include studying the properties of high order TSK systems in data analysis and data mining, and studying the performance of TSK systems with different function classes as the consequent part of the rules.


References


Almaraashi, M., & Coupland, S. (2010). Time series forecasting using a TSK fuzzy system tuned with simulated annealing. In IEEE International Conference on Fuzzy Systems (FUZZ).
Almaraashi, M., & John, R. (2011). Tuning of type-2 fuzzy systems by simulated annealing to predict time series. In Proceedings of the World Congress on Engineering, Vol. II.
Askari, S., & Montazerin, N. (2015). A high-order multi-variable fuzzy time series forecasting algorithm based on fuzzy clustering. Expert Systems with Applications, 2121–2135.
Buckley, J. J. (1992). Universal fuzzy controllers. Automatica, 28(6), 1245–1248.
Cavallo, A. (2005). High-order fuzzy sliding manifold control. Fuzzy Sets and Systems, 156(2), 249–266.
Chen, S., & Chen, C. (2011). Handling forecasting problems based on high-order fuzzy logical relationships. Expert Systems with Applications, 3857–3864.
Chen, S., & Tanuwijaya, K. (2011). Fuzzy forecasting based on high-order fuzzy logical relationships and automatic clustering techniques. Expert Systems with Applications, 15425–15437.
Chen, Y., Yang, B., Dong, J., & Abraham, A. (2004). Time-series forecasting using flexible neural tree model. Information Science. Elsevier.
Demirli, K., & Muthukumaran, P. (2000). Higher order fuzzy system identification using subtractive clustering. Journal of Intelligent and Fuzzy Systems, 9, 129–158.
Egrioglu, E., Aladag, C., Yolcu, U., Uslu, V. R., & Basaran, M. A. (2010). Finding an optimal interval length in high order fuzzy time series. Expert Systems with Applications, 5052–5055.
Eslahi Tatafi, S., Heydari, G., & Gharaveisi, A. (2014). Short-term electricity price forecasting using optimal TSK fuzzy systems. Journal of Mathematics and Computer Science, 238–246.
Gangwar, S., & Kumar, S. (2012). Partitions based computational method for high-order fuzzy time series forecasting. Expert Systems with Applications, 12158–12164.
Hamam, A., & Georganas, N. (2008). A comparison of Mamdani and Sugeno fuzzy inference systems for evaluating the quality of experience of hapto-audio-visual applications. In IEEE Haptic Audio Visual Environments and Games.
Herrera, L., Pomares, H., Rojas, I., Valenzuela, O., & Prieto, A. (2005). TaSe, a Taylor series-based fuzzy system model that combines interpretability and accuracy. Fuzzy Sets and Systems, 153, 403–427.
Heydari, G. (2015). Optimal design of nonlinear predictive controller based on high-order fuzzy systems. PhD thesis, Shahid Bahonar University of Kerman, Kerman, Iran.
Heydari, G., Gharaveisi, A., & Vali, M. (2015). New formulation for representing higher order TSK fuzzy systems. IEEE Transactions on Fuzzy Systems, 1.
Jang, J. (1993). ANFIS: adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics, 23, 665–685.
Jassbi, J., Serra, P., Ribeiro, R., & Donati, A. (2006). A comparison of Mamdani and Sugeno inference systems for a space fault detection application. In Automation Congress, 2006. WAC '06.
Kalhor, A., Araabi, B. N., & Lucasi, C. (2010). A new high-order Takagi–Sugeno fuzzy model based on deformed linear models. Amirkabir International Journal of Modeling, Identification, Simulation & Control, 42, 43–45.
Kasabov, N. K. (2002). DENFIS: dynamic evolving neural-fuzzy inference system and its application for time-series prediction. IEEE Transactions on Fuzzy Systems.
Kim, K., Kyung, K. M., Park, C., Kim, E., & Park, M. (2004). Robust TSK fuzzy modeling approach using noise clustering concept for function approximation. Computational and Information Science, 3314.
Mackey, M., & Glass, L. (1977). Oscillation and chaos in physiological control systems. Science, 197(4300), 287–289.
Rajaee, R. (2013). Chaotic time series prediction using nonlinear dynamic iterative systems model. M.Sc. thesis, Shahid Bahonar University of Kerman, Kerman, Iran.
Ren, Q. (2007). High order type-2 Takagi–Sugeno–Kang fuzzy logic system. Ph.D. thesis, Ecole Polytechnique de Montreal.
Song, Q., Ma, T., & Kasabov, N. (2005). Transductive knowledge based fuzzy inference system for personalized modeling. Fuzzy Systems and Knowledge Discovery, 3614.
Sugiarto, I., & Natarajan, S. (2007). Parameter estimation using least square method for MIMO Takagi–Sugeno neuro-fuzzy in time series forecasting. Jurnal Teknik Elektro, 7(2), 82–87.
Takács, M. (2004). Critical analysis of various known methods for approximate reasoning in fuzzy logic control. In 5th International Symposium of Hungarian Researchers on Computational Intelligence, Budapest, Hungary.
Takagi, T., & Sugeno, M. (1985). Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics, 116–132.
Theochaires, J. B. (2006). A high-order recurrent neuro-fuzzy system with internal dynamics: application to the adaptive noise cancellation. Fuzzy Sets and Systems, 157, 471–500.
Wang, L. (1997). A Course in Fuzzy Systems and Control. Upper Saddle River, NJ, USA: Prentice-Hall, Inc.
