CMAC-based neuro-fuzzy approach for complex system modeling


Kuo-Hsiang Cheng
Mechanical and Systems Research Laboratories, Industrial Technology Research Institute, Chutung, Hsinchu 310, Taiwan, Republic of China

Article history: Received 8 January 2007; received in revised form 15 May 2008; accepted 3 August 2008; available online 2 October 2008. Communicated by T. Heskes.

Abstract

A cerebellar model arithmetic computer (CMAC)-based neuro-fuzzy approach for accurate system modeling is proposed. The system design comprises structure determination and hybrid parameter learning. In the structure determination, the CMAC-based system constitution is used for structure initialization. With the generalization ability of the CMAC, the initial receptive field constitution is formed in a systematic way. In the parameter learning, the random optimization (RO) algorithm is combined with the least square estimation (LSE) to train the parameters, where the premises and the consequences are updated by RO and LSE, respectively. With the hybrid learning algorithm, a compact and well-parameterized CMAC can be achieved for the required performance. The proposed work features the following salient properties: (1) good generalization for system initialization; (2) derivative-free parameter update; and (3) fast convergence. To demonstrate the potential of the proposed approach, examples of SISO nonlinear approximation, MISO time series identification/prediction, and MIMO system mapping are conducted. Through the illustrations and numerical comparisons, the merit of the proposed work can be observed.

Keywords: Cerebellar model arithmetic computer network; Fuzzy inference; Hybrid learning; System modeling; Time series prediction

1. Introduction

System modeling has been an important issue in modern engineering and has been successfully applied to a diverse range of areas such as time series forecasting, predictive control, expert systems, signal processing, and system diagnosis [4,13,21,42]. Traditionally, two analytic models are used: the principle model and the experiment model. The principle model is obtained through physical and chemical laws. The experiment model is based on the input–output (I/O) data of the system, such as the autoregressive moving average (ARMA) model for linear systems and the nonlinear autoregressive moving average (NARMA) model for nonlinear systems [9,14,15,33]. However, these approaches face the difficulty that uncertainty flourishes in nature and in engineering environments, so the traditional methodologies may fail to give satisfactory performance.

With an associative memory network constituted by overlapping receptive fields, the cerebellar model arithmetic computer (CMAC) produces its output in accordance with the input state vector in a table-lookup fashion [1,2]. The input–output relationship is similar to a model of human memory, where each local receptive region is expected to perform the corresponding input–output (I/O) mapping. With the characteristics of fast learning, good generalization, and ease of hardware implementation, the CMAC has been applied to a wide range of applications [10,12,16,36]. For the


traditional CMAC, the abilities of data storage and modeling are limited due to the use of local constant basis functions [1,2,36]. The discrete memory of the CMAC structure is incapable of dealing with ill-defined problems [3,36]. Thus, several approaches have been proposed to improve the performance by means of differentiable cells. For example, CMAC networks with differentiable Gaussian functions have been developed [3,5,7,11,17,26,31,32,34,41]. In this case, the CMAC receives not binary values, but the basis function outputs with respect to the input vector. Recently, engineers and scientists have focused their attention on integrating the CMAC with the concept of linguistic IF–THEN rule representation. Based on approximate reasoning, fuzzy inference does not require a mathematical model of the encountered system and possesses the potential to capture human experience and knowledge to handle complexity; thereby it circumvents the shortcomings of hard computation [22,27,37]. In the design of intelligent learning, the network parameter optimization is achieved based on the gradient descent concept [7,17,19,28,29,31,32,36], reinforcement learning [11,18], or evolutionary strategies (ES) [35,38]. However, these methods have their inherent weaknesses. The gradient descent concept suffers from slow learning and entrapment in local minima. With the advanced requirement for accurate modeling, gradient descent-based systems may be unsuitable for fast system identification due to their sensitivity to initial conditions. On the other hand, genetic algorithms (GAs) or evolution-based algorithms [18,35,38] require heavy computational resources and large


memory space to execute the system evolution. To achieve local search and fast convergence, the length of the chromosomes must be sufficient for good resolution. Since each individual in the population has to be evaluated for its fitness in every learning epoch, the tradeoff between population size and convergence speed is problem-dependent. Therefore, the capabilities of these learning algorithms are limited.

In this paper, a cerebellar model arithmetic computer (CMAC)-based neuro-fuzzy approach is proposed. The design of the proposed neuro-fuzzy system comprises structure determination and hybrid parameter learning. In the structure determination, the CMAC-based constitution is exploited, where Gaussian basis activation functions are embedded in the blocks on the input spaces. With the generalization of the CMAC, the initial structure is pre-designed in a systematic manner; thus, the burden of system initialization is avoided. In the parameter learning phase, random optimization (RO) [23–25] is combined with the least square estimation (LSE) to train the parameters of the CMAC-based approach, where the premises and the consequences are updated by RO and LSE, respectively. To achieve efficient parameter learning, a conditional LSE parameter update is introduced, in which the consequence update is skipped if the current modeling error is acceptable. With the hybrid learning algorithm, a compact and well-parameterized system is achieved to satisfy the required performance. With the characteristics of the CMAC, the proposed work features the following salient properties: (1) good generalization for system initialization; (2) derivative-free parameter update; and (3) fast convergence. To demonstrate the potential of the proposed approach, examples of SISO nonlinear approximation, MISO time series identification/prediction, and MIMO system mapping are conducted. Through the illustrations and numerical comparisons, the merit of the CMAC-based approach can be observed, and the desired accuracy and efficiency are obtained.

The rest of this paper is organized as follows. In Section 2, the integration of the CMAC network and the fuzzy inference process is introduced. In Section 3, the RO–LSE algorithm for parameter learning is presented. In Section 4, the validation of the proposed CMAC neuro-fuzzy system is given. Finally, discussion and conclusions are given.

2. CMAC-based neuro-fuzzy system

2.1. Fundamentals of the CMAC network

The CMAC is a kind of neural network that simulates the human cerebellum. Generally, the input–output mapping of the CMAC can be viewed as a table-lookup process [1,2,10,12,16], where the inputs are related to the output through an association mechanism. In the CMAC, each input dimension is quantized and divided into several discrete elements, and several elements are combined to form a block. By shifting each block by a finite interval, different combinations of blocks can be obtained. For simplicity, a two-input CMAC constitution is shown in Fig. 1, where the number of discrete elements in each input space is denoted as n1 (n1 = 8), and the maximum number of discrete elements in a block is denoted as n2 (n2 = 4). The generated blocks for x1 are labeled I, II, III, ..., IX, and those for x2 are labeled i, ii, iii, ..., ix, respectively. In the memory association, each block is occupied by an activation basis function to form multi-dimensional receptive fields labeled Ii, IIii, ..., IXix. Thus, the mapping vector is generated:

B = [b_{Ii} \; b_{IIii} \; \cdots \; b_{IXix}]    (1)

According to the input pattern, the elements of the mapping vector are set to 0 or 1 to indicate the connections from the input space to the association space. In Fig. 1, the input pattern [x1 x2] = [4.5 4.5] activates the corresponding blocks IIii, Vv, and VIIIviii, and the mapping vector is B = [0 1 0 0 1 0 0 1 0]. Based on the obtained memory, the CMAC associates each activated receptive field with a corresponding physical memory to generate the output. With the processes of quantization, receptive field composition, and correlation mapping, only a few receptive fields are activated and contribute to the network output. In the parameter learning, the training algorithm only adjusts the weights corresponding to the activated fields. In such a way, an appropriate internal mapping is built based on the interaction between the receptive fields and the associated physical memory.

Fig. 1. CMAC structure.
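To make the quantization and block-activation idea concrete, the short sketch below builds the binary mapping vector of Eq. (1) for a two-input case. The block boundaries are illustrative assumptions (they are not read off Fig. 1); they are merely chosen so that the input [4.5 4.5] activates the receptive fields IIii, Vv, and VIIIviii as in the example above.

```python
import numpy as np

# Hypothetical block boundaries (lo, hi) shared by both inputs; three shifted
# layers of three blocks each, labelled I..IX for x1 and i..ix for x2.
BLOCKS = [(0, 4), (4, 8), (8, 12),     # layer 1: I,   II,   III
          (-2, 2), (2, 6), (6, 10),    # layer 2: IV,  V,    VI
          (-3, 1), (1, 5), (5, 9)]     # layer 3: VII, VIII, IX

def mapping_vector(x1, x2, blocks=BLOCKS):
    """Binary mapping vector B = [b_Ii, b_IIii, ..., b_IXix] of Eq. (1):
    entry k is 1 when both inputs fall inside block k of their dimension."""
    return np.array([int(lo <= x1 < hi and lo <= x2 < hi) for lo, hi in blocks])

print(mapping_vector(4.5, 4.5))   # -> [0 1 0 0 1 0 0 1 0]: IIii, Vv, VIIIviii active
```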

However, in the traditional CMAC, the physical memories are stimulated by a binary-valued vector, which limits the interpretation of the inputs. To tackle this difficulty, the fuzzy inference operation is incorporated into the receptive field composition. Based on the concept of human-like information processing, the fuzzified mapping is embedded in the CMAC to circumvent the limitation of hard computation. The CMAC-based neuro-fuzzy system is described in the following subsection.

2.2. CMAC-based neuro-fuzzy system

The layered CMAC neuro-fuzzy system is illustrated in Fig. 2, where an MISO structure is shown. It comprises the fuzzy activation function, receptive field mapping, normalization, TSK-based polynomial memory, and output layers. For simplicity, consider the crisp input vector X(N), which comprises the M input variables x_i(N), i = 1, 2, ..., M, measured at the N-th sampling:

X(N) = [x_1(N) \; x_2(N) \; \ldots \; x_M(N)]^T    (2)

where [\cdot]^T denotes the transpose. Let the vector of fuzzy membership functions \lambda(X(N)) and the vector of linguistic terms V(X(N)) be given as follows:

\lambda(X(N)) = [\lambda_1(x_1(N)) \; \lambda_2(x_2(N)) \; \ldots \; \lambda_M(x_M(N))]^T, \quad \lambda_i(x_i(N)) = [\mu_{i1}(x_i(N)) \; \mu_{i2}(x_i(N)) \; \ldots \; \mu_{iK}(x_i(N))]^T    (3)

Fig. 2. The proposed CMAC-based neuro-fuzzy system.


V(X(N)) = [v_1(x_1(N)) \; v_2(x_2(N)) \; \ldots \; v_M(x_M(N))]^T, \quad v_i(x_i(N)) = [v_{i1}(x_i(N)) \; v_{i2}(x_i(N)) \; \ldots \; v_{iK}(x_i(N))]^T    (4)

where \mu_{ik}(x_i(N)) and v_{ik}(x_i(N)) are the k-th membership function and the k-th linguistic term of the i-th linguistic input variable, respectively. In this CMAC-based neuro-fuzzy system, the linguistic terms can also be viewed as the block labels; for instance, v_{11} = I, v_{12} = II, v_{21} = i, and so on. The relationship between the linguistic value set and the membership function set is expressed as

S(X(N)) = V(X(N)) \otimes \lambda(X(N)) =
\begin{cases}
s_1 = v_1 \otimes \lambda_1 = [s_{11} \; s_{12} \; \ldots \; s_{1K}]^T \\
s_2 = v_2 \otimes \lambda_2 = [s_{21} \; s_{22} \; \ldots \; s_{2K}]^T \\
\vdots \\
s_M = v_M \otimes \lambda_M = [s_{M1} \; s_{M2} \; \ldots \; s_{MK}]^T
\end{cases}    (5)

where S(X(N)) is called the fuzzy set structure of the linguistic variables. The notation \otimes denotes the major Cartesian product operator, by which the linguistic value set and its membership function set are associated. In the fuzzy activation function layer, each basis function on the corresponding input space acts as a membership function and as a block. In this paper, the Gaussian function is adopted as the membership function. For the i-th input, the k-th activation function is given as

\mu_{ik}(m_{ik}, \sigma_{ik}) = \exp\left( -\dfrac{(x_i(N) - m_{ik})^2}{(\sigma_{ik})^2} \right)    (6)

where m_{ik} and \sigma_{ik} are the mean and the variance of the Gaussian function associated with the k-th block of the i-th input, which can be initialized by the location and the width of the corresponding block, respectively. In the proposed neuro-fuzzy approach, the constitution of the receptive field mapping layer can be represented in rule form. With the well-known TSK fuzzy model used for the physical memory layer, the k-th rule is depicted as follows:

Rule k: IF x_1 is s_{1k} and x_2 is s_{2k} ... and x_M is s_{Mk} THEN

y_k(N) = h_a^T \vec{a}_k    (7)

where h_a = [1 \; x_1(N) \; x_2(N) \; \ldots \; x_M(N)]^T, \vec{a}_k = [a_{0k} \; a_{1k} \; \ldots \; a_{Mk}]^T is the parameter set of the k-th rule, and y_k is the output of the k-th rule. Based on Eq. (7), the CMAC output Y(N) is expressed as

Y(N) = \dfrac{\sum_{k=1}^{K} b_k \, y_k(N)}{\sum_{k=1}^{K} b_k} = \sum_{k=1}^{K} \bar{b}_k \, h_a^T \vec{a}_k    (8)
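The following sketch (not the author's code) shows how Eqs. (6)–(8) can be evaluated for one input vector. One assumption is made explicit: the firing strength b_k of a receptive field is taken as the product of the Gaussian memberships of all inputs in that field, since the combination operator is not spelled out in the text.

```python
import numpy as np

def cmac_nf_output(x, means, sigmas, a):
    """Forward pass of the CMAC-based neuro-fuzzy model, Eqs. (6)-(8).

    x      : (M,)     input vector X(N)
    means  : (M, K)   Gaussian means m_ik
    sigmas : (M, K)   Gaussian widths sigma_ik
    a      : (K, M+1) TSK consequent parameters [a_0k, a_1k, ..., a_Mk]
    """
    mu = np.exp(-((x[:, None] - means) ** 2) / sigmas ** 2)  # Eq. (6), shape (M, K)
    b = mu.prod(axis=0)                   # assumed firing strength of each field
    b_bar = b / b.sum()                   # normalization layer
    h_a = np.concatenate(([1.0], x))      # regressor [1, x_1, ..., x_M]
    y = a @ h_a                           # rule outputs y_k, Eq. (7)
    return float(b_bar @ y)               # Eq. (8)

# Toy usage: M = 2 inputs, K = 3 receptive fields.
rng = np.random.default_rng(0)
print(cmac_nf_output(np.array([0.3, -0.2]),
                     rng.normal(size=(2, 3)), np.full((2, 3), 0.8),
                     rng.normal(size=(3, 3))))
```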

where b_k is the k-th element of the mapping vector in Eq. (1) and \bar{b}_k is the normalized value of b_k. With Eq. (6), the collection of premise parameter sets of the CMAC-based neuro-fuzzy system is

m = [m_1 \; m_2 \; \ldots \; m_M]^T    (9)
\sigma = [\sigma_1 \; \sigma_2 \; \ldots \; \sigma_M]^T    (10)
m_i = [m_{i1} \; m_{i2} \; \ldots \; m_{iK}]^T    (11)
\sigma_i = [\sigma_{i1} \; \sigma_{i2} \; \ldots \; \sigma_{iK}]^T    (12)

where i = 1, 2, ..., M. Thus, the collection of receptive basis parameters W is obtained:

W = [m \; \sigma]    (13)

In Eq. (8), the TSK-based memory set is collected to form the matrix A:

A = \begin{bmatrix} [a_{01} \; a_{11} \; \cdots \; a_{M1}]^T \\ [a_{02} \; a_{12} \; \cdots \; a_{M2}]^T \\ \vdots \\ [a_{0K} \; a_{1K} \; \cdots \; a_{MK}]^T \end{bmatrix} = \begin{bmatrix} \vec{a}_1 \\ \vec{a}_2 \\ \vdots \\ \vec{a}_K \end{bmatrix}    (14)

Based on Eqs. (6)–(8), the output can be represented as polynomial functions of the input signals, where each rule gives a local interpretation of the input space and the system output is approximated locally by the corresponding hyper-planes. Thus, with the obtained training patterns, the optimization of the CMAC network can be viewed as a problem of numerical parameter estimation.

3. RO-based hybrid algorithm with conditional parameter update

To train the proposed CMAC-based neuro-fuzzy system, RO [23–25] is used together with the LSE for fast convergence. The RO algorithm features a derivative-free and intuitive exploration of the parameter space. Moreover, the RO method excels not only in its simplicity and convenience, but is also guaranteed to converge to the global minimum with probability one on a compact set [25]. With this distinctive advantage, which alleviates the design effort for system learning, RO provides an alternative way of achieving computational intelligence. The RO-based algorithm, combined with the conditional LSE parameter update, is described as follows.

Assume the output Y(N) of the proposed CMAC approach is denoted as f_CMAC(W, A, X(N)), where the I/O mapping relationship is described by the parameter sets W and A and the input X(N). The problem of system modeling can be stated as finding the optimal W and A that minimize the cost function

E(W, A) = \left( \dfrac{1}{Q} \sum_{N=1}^{Q} (D(N) - f_{CMAC}(W, A, X(N)))^2 \right)^{1/2}    (15)

which is the root mean square error (RMSE) between the CMAC output and the desired outputs D(N) over the samples N = 1 to Q. With the integration of RO–LSE, each candidate point generated by RO is viewed as a potential premise parameter solution. Based on Eq. (8), the relationship between the input vector X(N) and the desired output D(N) can be given as follows:

D(N) = \sum_{k=1}^{K} \bar{b}_k(N) \, (a_{0k} + a_{1k} x_1(N) + \cdots + a_{Mk} x_M(N)) + \varepsilon(N)    (16)

where \varepsilon(N) is the modeling error. Let

g_k(N) = \bar{b}_k(N) [1 \; x_1(N) \; x_2(N) \; \ldots \; x_M(N)]^T = \bar{b}_k(N) h_a(N)    (17)
G(N) = [g_1(N) \; g_2(N) \; \ldots \; g_K(N)]^T    (18)

Then Eq. (16) can be represented as follows:

D(N) = G^T(N) A + \varepsilon(N)    (19)

Assume there are Q training data pairs [X(N) D(N)], N = 1, 2, ..., Q, to be identified; we then have

\underbrace{\begin{bmatrix} D(1) \\ D(2) \\ \vdots \\ D(Q) \end{bmatrix}}_{D} = \underbrace{\begin{bmatrix} g_1(1) & g_2(1) & \cdots & g_K(1) \\ g_1(2) & g_2(2) & \cdots & g_K(2) \\ \vdots & \vdots & \ddots & \vdots \\ g_1(Q) & g_2(Q) & \cdots & g_K(Q) \end{bmatrix}}_{B} \underbrace{\begin{bmatrix} \vec{a}_1 \\ \vec{a}_2 \\ \vdots \\ \vec{a}_K \end{bmatrix}}_{A} + \underbrace{\begin{bmatrix} \varepsilon(1) \\ \varepsilon(2) \\ \vdots \\ \varepsilon(Q) \end{bmatrix}}_{\varepsilon}    (20)

With the LSE, the optimal consequence parameter set \tilde{A} can be obtained so that \varepsilon is minimized, given as follows:

\tilde{A} = (B^T B)^{-1} B^T D    (21)

In Eq. (21), the consequence parameter set \tilde{A} is determined. The details of the hybrid algorithm are given in the Appendix.
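For illustration, the regressor of Eqs. (17)–(18) and the design matrix B of Eq. (20) can be assembled as in the sketch below (helper names are hypothetical); the batch LSE of Eq. (21) then follows from a standard least-squares solve.

```python
import numpy as np

def regressor_row(x, b_bar):
    """Stacked regressor G(N) of Eqs. (17)-(18): for each rule k, g_k(N) is the
    normalized firing strength times [1, x_1(N), ..., x_M(N)]."""
    h_a = np.concatenate(([1.0], x))                     # [1, x_1, ..., x_M]
    return np.concatenate([bk * h_a for bk in b_bar])    # length K*(M+1)

def design_matrix(X, B_bar):
    """Matrix B of Eq. (20): one regressor row per training pattern."""
    return np.vstack([regressor_row(x, b) for x, b in zip(X, B_bar)])

# With B and the desired outputs D, the batch LSE of Eq. (21) can be solved as
#   A_tilde = np.linalg.lstsq(B, D, rcond=None)[0]
# (lstsq is used instead of an explicit matrix inverse for numerical robustness).
```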

However, the employment of the LSE implies that the system learning is primarily batch training, in which a fixed number of training data patterns is involved. Alternatively, when on-line training is required, the LSE can be implemented in a recursive way, called recursive LSE (RLSE), where the update is conducted with each individual training pattern:

P(N) = P(N-1) - \dfrac{P(N-1) G(N) G(N)^T P(N-1)}{1 + G(N)^T P(N-1) G(N)}    (22)

\tilde{A}(N) = \tilde{A}(N-1) + P(N) G(N) (D(N) - G(N)^T \tilde{A}(N-1))    (23)

where P(0) = \alpha I \in \Re^{((M+1)K) \times ((M+1)K)} is given with a large value \alpha, and \tilde{A}(0) can be initially set to zeros. In this paper, a criterion for the parameter update is given for the purpose of computational efficiency. With the training pattern [X(N) D(N)] obtained at the N-th sampling, the error term \varepsilon(N) given in Eq. (16) is compared to a pre-given threshold T. If |\varepsilon(N)| \le T, the parameter update is omitted and \tilde{A}(N) = \tilde{A}(N-1). On the contrary, the parameter update of Eqs. (22) and (23) is executed for |\varepsilon(N)| > T. Thus, unnecessary computational load can be avoided when the current system performance is acceptable.
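A minimal sketch of the RLSE recursion with the conditional update is given below; the variable names and the value of alpha are assumptions, not taken from the paper.

```python
import numpy as np

class ConditionalRLSE:
    """Recursive least squares of Eqs. (22)-(23) with the conditional update:
    the consequent parameters are left unchanged whenever the current modeling
    error is already within the threshold T."""

    def __init__(self, dim, alpha=1e6, threshold=1e-5):
        self.P = alpha * np.eye(dim)      # P(0) = alpha * I, alpha large
        self.A = np.zeros(dim)            # A~(0) = 0
        self.T = threshold

    def update(self, g, d):
        """g: stacked regressor G(N) (length dim); d: desired output D(N)."""
        err = d - g @ self.A              # epsilon(N) of Eq. (16)
        if abs(err) <= self.T:            # conditional update: skip if acceptable
            return err
        Pg = self.P @ g
        self.P -= np.outer(Pg, Pg) / (1.0 + g @ Pg)   # Eq. (22)
        self.A += self.P @ g * err                     # Eq. (23)
        return err
```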

4. Simulation results

Examples of SISO, MISO, and MIMO systems are used to demonstrate the capability of the proposed work. With the pre-given training patterns, the system identification is executed to describe the system to be identified. After sufficient learning, the learned CMAC-based neuro-fuzzy system is used to predict future outputs with the identified model. The initial settings for hybrid learning and the CMAC parameter settings are given in Tables 1 and 2, respectively. In Table 1, the parameters are chosen with a few trials; the variances Z are given small values, and the total number of learning iterations, t_iteration, should be set according to the desired performance. In Table 2, the initial settings of the receptive basis functions are given by the CMAC constitution; the means and variances are pre-defined according to the blocks on the input space.

Table 1. Parameter settings of the proposed work.

            Algorithm   t_iteration   Z       T
Example 1   RO–LSE      150           0.05    –
Example 2   RO–RLSE     200           0.01    0.00001
Example 3   RO–RLSE     150           0.02    0.00001

Table 2. Initial structure of the proposed work.

Example 1
Blocks   x1(N) = x(N)
         m       s
I        -1.6    1.6
II        0.4    0.8
III      -0.4    0.8
IV        1.6    1.6

Example 2
Blocks          x1(N) = x(N-18)   x2(N) = x(N-12)   x3(N) = x(N-6)   x4(N) = x(N)
                m      s          m      s          m      s         m      s
I/i/A/a         0.5    0.25       0.5    0.25       0.5    0.25      0.5    0.25
II/ii/B/b       1.4    0.75       1.4    0.75       1.4    0.75      1.4    0.75
III/iii/C/c     0.6    0.50       0.6    0.50       0.6    0.50      0.6    0.50
IV/iv/D/d       1.5    0.50       1.5    0.50       1.5    0.50      1.5    0.50
V/v/E/e         0.7    0.75       0.7    0.75       0.7    0.75      0.7    0.75
VI/vi/F/f       1.3    0.25       1.3    0.25       1.3    0.25      1.3    0.25
VII/vii/G/g     0.7    0.50       0.75   0.50       0.75   0.50      0.75   0.50
VIII/viii/H/h   1.25   0.50       1.25   0.50       1.25   0.50      1.25   0.50
IX/ix/I/i       1.0    0.75       1.0    0.75       1.0    0.75      1.0    0.75
X/x/J/j         1.1    0.25       1.1    0.25       1.1    0.25      1.1    0.25
XI/xi/K/k       0.8    0.25       0.8    0.25       0.8    0.25      0.8    0.25
XII/xii/L/l     1.25   0.75       1.25   0.75       1.25   0.75      1.25   0.75

Example 3
Blocks      x1(N) = x(N)      x2(N) = y(N)
            m       s         m       s
I/i         -0.45   0.9       -0.45   0.9
II/ii        0.15   0.3        0.15   0.3
III/iii     -0.39   0.9       -0.39   0.9
IV/iv        0.21   0.6        0.21   0.6
V/v         -0.3    0.6       -0.3    0.6
VI/vi        0.3    0.6        0.3    0.6
VII/vii     -0.21   0.6       -0.21   0.6
VIII/viii    0.39   0.9        0.39   0.9
IX/ix       -0.15   0.3       -0.15   0.3
X/x          0.45   0.9        0.45   0.9
XI/xi       -0.45   0.9       -0.45   0.9
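The initial means and variances in Table 2 come directly from the centers and widths of the blocks, as stated in Section 2. The sketch below illustrates such a block-based initialization; the uniform spacing is a simplifying assumption and does not reproduce the specific layouts listed in Table 2.

```python
import numpy as np

def init_blocks(x_min, x_max, n_blocks):
    """Hypothetical uniform block layout for one input dimension: the Gaussian
    mean of Eq. (6) is the block centre and the width parameter is the block
    width."""
    width = (x_max - x_min) / n_blocks
    centers = x_min + width * (np.arange(n_blocks) + 0.5)
    return centers, np.full(n_blocks, width)

m, s = init_blocks(-4.0, 4.0, 4)   # e.g. four blocks on [-4, 4] as in Example 1
print(m, s)
```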


Table 3. CMAC parameters for static function approximation (after learning).

Blocks   x1(N) = x(N)
         m          s
I        -1.3998    1.3750
II        0.5676    1.1904
III      -0.4211    1.1227
IV        1.6781    1.4880

Consequent part = a0k + a1k x1(N)
Receptive fields   a0k        a1k
I                  -0.2891    -0.0680
II                  2.1086     2.0915
III                 1.1841    -5.7089
IV                 -0.2616     0.0617

Example 1. SISO static function approximation

The SISO CMAC-based neuro-fuzzy system is used to approximate the static Hermite polynomial, which is described as follows [6,20,30,39]:

f(x) = 1.1 (1 - x + 2x^2) \exp\left(-\dfrac{x^2}{2}\right)    (24)

The system training process is the same as in [39], where 200 uniform random samples in [-4, 4] are used as the inputs to obtain the outputs. The data pairs extracted from Eq. (24) are expressed as

X(N) = x(N)    (25)
D(N) = f(x(N))    (26)

where N = 1 to 200. The learning curve over 150 training epochs and the trained Gaussian basis functions are given in Fig. 3(a) and (b). In the CMAC structure, there are four blocks labeled I, II, III, and IV, respectively. The system parameters are shown in Table 3, and the RMSE cost of the approximation converges to 0.0017.

Fig. 3. Result of the proposed work for static function approximation: (a) learning curve, (b) the Gaussian basis functions (solid line: before training; dotted line: after training), (c) identification performance, and (d) identification error.

Based on Eqs. (6) and (7) and Table 3, the rules of the proposed CMAC can be given as follows:

Rule 1: IF x1 is \mu_{11}(-1.3998, 1.3750) THEN y1 = -0.2891 - 0.0680 x1
Rule 2: IF x1 is \mu_{12}(0.5676, 1.1904) THEN y2 = 2.1086 + 2.0915 x1
Rule 3: IF x1 is \mu_{13}(-0.4211, 1.1227) THEN y3 = 1.1841 - 5.7089 x1
Rule 4: IF x1 is \mu_{14}(1.6781, 1.4880) THEN y4 = -0.2616 + 0.0617 x1

where x1 = x(N) is the input and yk is the output of each rule, k = 1, 2, 3, 4. Thus, the linguistic knowledge for accurate approximation can be obtained through hybrid learning. The identification performance of the proposed CMAC and the error are illustrated in Fig. 3(c) and (d). The comparisons to the back-propagation-based CMAC [41], D-FNN [39], OLS [6], RANEKF [20], and M-RAN [30] are given in Table 4, where the proposed CMAC provides a more accurate approximation with fewer neurons.
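For reference, the sketch below generates the training data of Eqs. (24)–(26) and evaluates the learned rule base of Table 3. Two hedges apply: the firing strength of each receptive field is assumed to equal its Gaussian membership value in this SISO case, and the printed RMSE need not match the reported 0.0017 because the tabulated parameters are rounded.

```python
import numpy as np

# Training data of Eqs. (24)-(26): 200 uniform random samples on [-4, 4].
rng = np.random.default_rng(1)          # seed is an arbitrary choice
x = rng.uniform(-4.0, 4.0, size=200)
d = 1.1 * (1.0 - x + 2.0 * x**2) * np.exp(-x**2 / 2.0)   # Eq. (24)

# Learned SISO rule base of Table 3.
m  = np.array([-1.3998, 0.5676, -0.4211, 1.6781])
s  = np.array([ 1.3750, 1.1904,  1.1227, 1.4880])
a0 = np.array([-0.2891, 2.1086,  1.1841, -0.2616])
a1 = np.array([-0.0680, 2.0915, -5.7089,  0.0617])

mu = np.exp(-((x[:, None] - m) ** 2) / s**2)                      # Eq. (6)
y  = (mu * (a0 + a1 * x[:, None])).sum(axis=1) / mu.sum(axis=1)   # Eq. (8)
print("RMSE of the reconstructed rule base:", np.sqrt(np.mean((d - y) ** 2)))
```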

Example 2. MISO Mackey–Glass chaotic time series identification

The Mackey–Glass differential delay equation, which is a benchmark for system modeling, is given as follows [6,8,39]:

\dot{x}(t) = \dfrac{0.2 x(t-\tau)}{1 + x^{10}(t-\tau)} - 0.1 x(t)    (27)

where \tau \ge 17. The equation shows chaotic behavior, and a higher-dimensional chaos is observed with a higher value of \tau. The signal generation is based on the fourth-order Runge–Kutta method, where the initial conditions are given as x(0) = 1.2, \tau = 17, and x(t) = 0 for t < 0. For time series identification and prediction, the I/O relationship is described as

x(t+6) = f(x(t-18), x(t-12), x(t-6), x(t))    (28)
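A sketch of the signal generation and of the data pairs (defined formally in Eqs. (29)–(30) below) is given here. The integration step dt = 1.0 is an assumption (the paper does not state it), and the delayed term is held fixed within each Runge–Kutta step as a simplification.

```python
import numpy as np

def mackey_glass(n_steps, tau=17, dt=1.0, x0=1.2):
    """Mackey-Glass series of Eq. (27) via fourth-order Runge-Kutta,
    with x(t) = 0 for t < 0 and x(0) = 1.2."""
    hist = int(np.ceil(tau / dt))
    x = np.zeros(n_steps + hist)
    x[hist] = x0

    def f(x_now, x_lag):
        return 0.2 * x_lag / (1.0 + x_lag ** 10) - 0.1 * x_now

    for n in range(hist, n_steps + hist - 1):
        x_lag = x[n - hist]               # x(t - tau), held fixed over the step
        k1 = f(x[n], x_lag)
        k2 = f(x[n] + 0.5 * dt * k1, x_lag)
        k3 = f(x[n] + 0.5 * dt * k2, x_lag)
        k4 = f(x[n] + dt * k3, x_lag)
        x[n + 1] = x[n] + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x[hist:]

# Data pairs: inputs [x(N-18), x(N-12), x(N-6), x(N)], target x(N+6), N = 124..2123.
x = mackey_glass(2200)
N = np.arange(124, 2124)
X = np.stack([x[N - 18], x[N - 12], x[N - 6], x[N]], axis=1)
D = x[N + 6]
```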

Table 4. Performance comparison of the proposed work with other approaches for static function approximation.

Method                              Number of neurons   RMSE
Back-propagation-based CMAC [17]    4 (blocks)          0.03430 (with 100 training epochs)
                                    4                   0.02893 (with 200 training epochs)
                                    4                   0.02595 (with 300 training epochs)
                                    4                   0.02383 (with 400 training epochs)
                                    4                   0.02220 (with 500 training epochs)
                                    4                   0.01736 (with 1000 training epochs)
                                    4                   0.01238 (with 2000 training epochs)
D-FNN [39]                          6                   0.0056
OLS [6]                             7                   0.0095
RANEKF [20]                         13                  0.0262
M-RAN [30]                          7                   0.009
Proposed work                       4                   0.0017

Thus, the data pairs extracted from the series are given as

X(N) = [x(N-18) \; x(N-12) \; x(N-6) \; x(N)]    (29)
D(N) = x(N+6)    (30)

In Eq. (30), 2000 data pairs from N = 124 to 2123 are generated. The first 1000 data points (N = 124 to 1123) are collected as the training patterns. After training, the remaining 1000 data points (N = 1124 to 2123) are used for prediction. The learning curve is given in Fig. 4(a). After 200 training epochs, the system parameters are given in Table 5. The rules of the proposed CMAC can be given as follows:

Rule 1: IF x1 is \mu_{11}(0.5260, 0.2748) and x2 is \mu_{21}(0.4849, 0.2424) and x3 is \mu_{31}(0.5084, 0.2018) and x4 is \mu_{41}(0.4655, 0.2603) THEN y1 = -44.9299 + 26.4104 x1 - 1.8 x2 - 13.8572 x3 + 45.8608 x4
Rule 2: IF x1 is \mu_{12}(1.4010, 0.7825) and x2 is \mu_{22}(1.4242, 0.7310) and x3 is \mu_{32}(1.3980, 0.7701) and x4 is \mu_{42}(1.4463, 0.7354) THEN y2 = -834.4849 + 29.4245 x1 + 141.1654 x2 + 214.8678 x3 - 73.2779 x4
Rule 3: IF x1 is \mu_{13}(0.6133, 0.4861) and x2 is \mu_{23}(0.6137, 0.5121) and x3 is \mu_{33}(0.5654, 0.4867) and x4 is \mu_{43}(0.621, 0.4899) THEN y3 = -9.3773 + 6.9134 x1 + 14.0501 x2 + 17.7883 x3 - 7.9695 x4
Rule 4: IF x1 is \mu_{14}(1.4905, 0.535) and x2 is \mu_{24}(1.476, 0.5176) and x3 is \mu_{34}(1.5032, 0.4788) and x4 is \mu_{44}(1.5144, 0.489) THEN y4 = -57.3397 - 55.3359 x1 + 19.717 x2 - 1.5293 x3 + 85.5651 x4
Rule 5: IF x1 is \mu_{15}(0.7026, 0.7295) and x2 is \mu_{25}(0.6622, 0.758) and x3 is \mu_{35}(0.6636, 0.7622) and x4 is \mu_{45}(0.7508, 0.787) THEN y5 = 12.7425 - 15.7360 x1 - 18.9063 x2 - 29.9787 x3 + 2.636 x4
Rule 6: IF x1 is \mu_{16}(1.3137, 0.2170) and x2 is \mu_{26}(1.2904, 0.2549) and x3 is \mu_{36}(1.3018, 0.2496) and x4 is \mu_{46}(1.2701, 0.2731) THEN y6 = -112.5867 + 70.3187 x1 + 13.679 x2 - 7.1759 x3 + 26.1357 x4
Rule 7: IF x1 is \mu_{17}(0.7605, 0.4896) and x2 is \mu_{27}(0.7655, 0.4999) and x3 is \mu_{37}(0.7207, 0.4467) and x4 is \mu_{47}(0.7729, 0.5047) THEN y7 = -26.8855 + 7.6335 x1 + 0.0988 x2 - 3.201 x3 + 25.1167 x4
Rule 8: IF x1 is \mu_{18}(1.2855, 0.4639) and x2 is \mu_{28}(1.216, 0.4855) and x3 is \mu_{38}(1.2536, 0.4576) and x4 is \mu_{48}(1.2619, 0.5291) THEN y8 = -24.1375 + 9.183 x1 + 3.9464 x2 + 9.9124 x3 + 8.4884 x4
Rule 9: IF x1 is \mu_{19}(1.0248, 0.7294) and x2 is \mu_{29}(1.0066, 0.7531) and x3 is \mu_{39}(0.9741, 0.7367) and x4 is \mu_{49}(0.9954, 0.7616) THEN y9 = 200.7368 - 54.0272 x1 - 45.6618 x2 + 9.6188 x3 - 112.8197 x4
Rule 10: IF x1 is \mu_{110}(1.1166, 0.2576) and x2 is \mu_{210}(1.1165, 0.2468) and x3 is \mu_{310}(1.119, 0.2413) and x4 is \mu_{410}(1.0771, 0.2284) THEN y10 = 7.9210 - 7.861 x1 + 0.7702 x2 + 3.3119 x3 - 3.3256 x4
Rule 11: IF x1 is \mu_{111}(0.8179, 0.2518) and x2 is \mu_{211}(0.7863, 0.2749) and x3 is \mu_{311}(0.7913, 0.2569) and x4 is \mu_{411}(0.8175, 0.2778) THEN y11 = -4.2272 + 2.7384 x1 + 3.5211 x2 - 2.8403 x3 + 1.8118 x4
Rule 12: IF x1 is \mu_{112}(1.2724, 0.786) and x2 is \mu_{212}(1.2367, 0.7386) and x3 is \mu_{312}(1.2143, 0.7712) and x4 is \mu_{412}(1.2448, 0.7626) THEN y12 = -221.7401 + 95.4694 x1 + 99.4774 x2 - 15.0772 x3 + 308.3296 x4

Fig. 4. The simulation results of the proposed work for the Mackey–Glass time series with N = 124–2123: (a) learning curve, (b) the desired data to be identified (dotted line) and the output of the CMAC (solid line), (c) the identification error, (d) the desired data to be predicted (dotted line) and the output (solid line), and (e) the prediction error.

The identification performance of the CMAC and the identification error are illustrated in Fig. 4(b) and (c); the identification RMSE of the training phase converges to 0.0035. With the determined system parameters, the prediction performance of the CMAC and the prediction error are illustrated in Fig. 4(d) and (e); the prediction RMSE of the testing phase is 0.0036. The comparisons to the approaches of [6,8,39,41] are given in Table 6, where the proposed CMAC provides a more accurate approximation.

Table 5. Parameters for the Mackey–Glass chaotic time series with N = 124–2123 (after learning).

Blocks          x1(N) = x(N-18)    x2(N) = x(N-12)    x3(N) = x(N-6)     x4(N) = x(N)
                m        s         m        s         m        s         m        s
I/i/A/a         0.5260   0.2748    0.4849   0.2424    0.5084   0.2108    0.4655   0.2603
II/ii/B/b       1.4010   0.7825    1.4242   0.7310    1.3980   0.7701    1.4463   0.7354
III/iii/C/c     0.6133   0.4861    0.6137   0.5121    0.5654   0.4867    0.6210   0.4899
IV/iv/D/d       1.4905   0.5350    1.4760   0.5176    1.5032   0.4788    1.5144   0.4890
V/v/E/e         0.7026   0.7295    0.6622   0.7580    0.6636   0.7622    0.7508   0.7870
VI/vi/F/f       1.3137   0.2170    1.2904   0.2549    1.3018   0.2496    1.2701   0.2731
VII/vii/G/g     0.7605   0.4896    0.7655   0.4999    0.7207   0.4467    0.7729   0.5047
VIII/viii/H/h   1.2855   0.4639    1.2160   0.4855    1.2536   0.4576    1.2619   0.5291
IX/ix/I/i       1.0248   0.7294    1.0066   0.7531    0.9741   0.7367    0.9954   0.7616
X/x/J/j         1.1166   0.2576    1.1165   0.2468    1.1190   0.2413    1.0771   0.2284
XI/xi/K/k       0.8179   0.2518    0.7863   0.2749    0.7913   0.2569    0.8175   0.2778
XII/xii/L/l     1.2724   0.7860    1.2367   0.7386    1.2143   0.7712    1.2448   0.7626

Consequent part = a0k + a1k x1(N) + a2k x2(N) + a3k x3(N) + a4k x4(N)

Receptive fields   a0k          a1k         a2k         a3k         a4k
IiAa               -44.9299     26.4104     -1.8000     -13.8572    45.8608
IIiiBb             -834.4849    29.4245     141.1654    214.8678    -73.2779
IIIiiiCc           -9.3773      6.9134      14.0501     17.7883     -7.9695
IVivDd             -57.3397     -55.3359    19.7170     -1.5293     85.5651
VvEe               12.7425      -15.7360    -18.9063    -29.9787    2.6360
VIviFf             -112.5867    70.3187     13.6790     -7.1759     26.1357
VIIviiGg           -26.8855     7.6335      0.0988      -3.2010     25.1167
VIIIviiiHh         -24.1375     9.1830      3.9464      9.9124      8.4884
IXixIi             200.7368     -54.0272    -45.6618    9.6188      -112.8197
XxJj               7.9210       -7.8610     0.7702      3.3119      -3.3256
XIxiKk             -4.2272      2.7384      3.5211      -2.8403     1.8118
XIIxiiLl           -221.7401    95.4694     99.4774     -15.0772    308.3296

Example 3. MIMO Ikeda mapping

A two-dimensional dynamic system in which strange attractors are observed is used for the problem of MIMO system mapping [40]. The data pairs are generated by the following equations:

x(N+1) = 0.85 + 0.9 (x(N) \cos(u) - y(N) \sin(u))    (31)
y(N+1) = 0.9 (y(N) \cos(u) + x(N) \sin(u))    (32)

where

u = 0.4 - \dfrac{5.5}{1 + x(N)^2 + y(N)^2}    (33)

The initial conditions are given as x(0) = y(0) = 0. Based on Eqs. (31)–(33), the dynamic system is illustrated in Fig. 5. In this simulation, 500 examples are used; the first 300 are employed as training examples and the remaining 200 are used for testing:

X(N) = \begin{bmatrix} x_1(N) \\ x_2(N) \end{bmatrix} = \begin{bmatrix} x(N) \\ y(N) \end{bmatrix}    (34)

D(N) = \begin{bmatrix} D_x(N) \\ D_y(N) \end{bmatrix} = \begin{bmatrix} x(N+1) \\ y(N+1) \end{bmatrix}    (35)

In the MIMO system modeling, RO is combined with the RLSE. Thus, the parameter update can be implemented with each individual training pattern:

P(N) = P(N-1) - \dfrac{P(N-1) G(N) G(N)^T P(N-1)}{1 + G(N)^T P(N-1) G(N)}    (36)

\tilde{A}_i(N) = \tilde{A}_i(N-1) + P(N) G(N) (D_i(N) - G(N)^T \tilde{A}_i(N-1))    (37)

where i \in \{x, y\}. The learning curve of 150 training epochs and the trained Gaussian basis functions are given in Fig. 6(a)–(c). In the CMAC structure, there are eleven blocks for each input. The system parameters are shown in Table 7. For the CMAC neuro-fuzzy system, the inputs are x(N) and y(N) and the outputs are x(N+1) and y(N+1). The rules of the proposed CMAC can be given as follows:

Rule 1: IF x1 is \mu_{11}(0.4294, 0.9559) and x2 is \mu_{21}(0.5681, 0.8826) THEN y11 = 45.0862 + 10.0788 x1 + 12.0482 x2 and y21 = 0.9166 + 1.2072 x1 + 0.8477 x2
Rule 2: IF x1 is \mu_{12}(0.3092, 0.877) and x2 is \mu_{22}(0.5372, 0.9183) THEN y12 = 0.3039 + 3.7545 x1 + 2.5516 x2 and y22 = 3.1045 + 1.6395 x1 + 3.8446 x2
Rule 3: IF x1 is \mu_{13}(0.3606, 0.6339) and x2 is \mu_{23}(0.3135, 0.6369) THEN y13 = 17.4959 + 16.3949 x1 + 3.6522 x2 and y23 = 4.199 - 2.996 x1 + 1.6875 x2
Rule 4: IF x1 is \mu_{14}(0.1056, 0.5861) and x2 is \mu_{24}(0.2155, 0.5535) THEN y14 = 10.0895 - 4.1245 x1 - 14.3522 x2 and y24 = 5.6141 + 7.3718 x1 - 2.106 x2
Rule 5: IF x1 is \mu_{15}(0.1316, 0.3415) and x2 is \mu_{25}(0.024, 0.3421) THEN y15 = 28.0454 + 3.1564 x1 - 5.5384 x2 and y25 = 9.4618 - 7.6922 x1 + 7.1225 x2
Rule 6: IF x1 is \mu_{16}(-0.0379, 0.2375) and x2 is \mu_{26}(0.0163, 0.2841) THEN y16 = 3.4962 - 2.2462 x1 + 3.6344 x2 and y26 = 1.667 - 1.3112 x1 - 2.8174 x2

Table 6. Performance comparison of the proposed work with other approaches for the Mackey–Glass chaotic time series with N = 124–2123.

Method                                                       RMSE_training   RMSE_testing   Rules
Back-propagation-based CMAC [17] (with 200 training epochs)  0.0140          0.0171         12 (blocks)
  (with 500 training epochs)                                 0.0109          0.0101         12 (blocks)
  (with 1000 training epochs)                                0.0083          0.0074         12 (blocks)
  (with 2000 training epochs)                                0.0067          0.0059         12 (blocks)
  (with 5000 training epochs)                                0.0044          0.0045         12 (blocks)
D-FNN [39]                                                   0.0132          0.0131         5
OLS [6]                                                      0.0158          0.0163         13
RBF-AFS [8]                                                  0.0107          0.0128         21
Proposed work                                                0.0035          0.0036         12 (blocks)

Fig. 5. Ikeda mapping.

Table 7. Parameters for the Ikeda mapping (after learning).

Blocks      x1(N) = x(N)         x2(N) = y(N)
            m         s          m         s
I/i          0.4294   0.9559      0.5681   0.8826
II/ii        0.3092   0.8770      0.5372   0.9183
III/iii      0.3606   0.6339      0.3135   0.6369
IV/iv        0.1056   0.5861      0.2155   0.5535
V/v          0.1316   0.3415      0.0240   0.3421
VI/vi       -0.0379   0.2375      0.0163   0.2841
VII/vii     -0.0321   0.3502     -0.1422   0.3486
VIII/viii   -0.1816   0.5594     -0.2197   0.5841
IX/ix       -0.1887   0.6070     -0.2759   0.7166
X/x         -0.3405   0.8898     -0.3983   0.8466
XI/xi       -0.5572   0.7887     -0.5197   0.8440

Receptive fields (x-dimension)   a0k        a1k         a2k
Ii                               45.0862    10.0788     12.0482
IIii                             0.3039     3.7545      2.5516
IIIiii                           17.4959    16.3949     3.6522
IViv                             10.0895    -4.1245     -14.3522
Vv                               28.0454    3.1564      -5.5384
VIvi                             3.4962     -2.2462     3.6344
VIIvii                           17.0805    -4.0826     -12.7457
VIIIviii                         33.5772    4.5423      3.0921
IXix                             0.2022     7.2977      -0.4596
Xx                               41.7722    -15.4902    -2.5002
XIxi                             1.2686     0.9371      -1.4097

Receptive fields (y-dimension)   a0k        a1k         a2k
Ii                               0.9166     1.2072      0.8477
IIii                             3.1045     1.6395      3.8446
IIIiii                           4.1990     -2.9960     1.6875
IViv                             5.6141     7.3718      -2.1060
Vv                               9.4618     -7.6922     7.1225
VIvi                             1.6670     -1.3112     -2.8174
VIIvii                           12.0875    12.4252     -11.5154
VIIIviii                         8.4318     2.1591      2.9176
IXix                             2.5715     2.2874      7.7168
Xx                               16.2498    4.8347      0.7121
XIxi                             0.2459     -0.2429     1.6536

Fig. 6. The simulation results of the proposed work for Ikeda mapping: (a) learning curve, (b) the trained Gaussian basis functions in x-dimension, and (c) the trained Gaussian basis functions in y-dimension.

Rule 7: IF x1 is \mu_{17}(-0.0321, 0.3502) and x2 is \mu_{27}(-0.1422, 0.3486) THEN y17 = 17.0805 - 4.0826 x1 - 12.7457 x2 and y27 = 12.0875 + 12.4252 x1 - 11.5154 x2
Rule 8: IF x1 is \mu_{18}(-0.1816, 0.5594) and x2 is \mu_{28}(-0.2197, 0.5841) THEN y18 = 33.5772 + 4.5423 x1 + 3.0921 x2 and y28 = 8.4318 + 2.1591 x1 + 2.9176 x2
Rule 9: IF x1 is \mu_{19}(-0.1887, 0.607) and x2 is \mu_{29}(-0.2759, 0.7166) THEN y19 = 0.2022 + 7.2977 x1 - 0.4596 x2 and y29 = 2.5715 + 2.2874 x1 + 7.7168 x2
Rule 10: IF x1 is \mu_{110}(-0.3405, 0.8898) and x2 is \mu_{210}(-0.3983, 0.8466) THEN y110 = 41.7722 - 15.4902 x1 - 2.5002 x2 and y210 = 16.2498 + 4.8347 x1 + 0.7121 x2
Rule 11: IF x1 is \mu_{111}(-0.5572, 0.7887) and x2 is \mu_{211}(-0.5197, 0.844) THEN y111 = 1.2686 + 0.9371 x1 - 1.4097 x2 and y211 = 0.2459 - 0.2429 x1 + 1.6536 x2

The cost value of the RMSE is 0.0006 for the identification and 0.0007 for the prediction. The performance of the proposed CMAC and the errors are illustrated in Fig. 7(a)–(d), and the output of the proposed CMAC is shown in Fig. 8. The performance comparisons to a neural network and a fuzzy-neural network are given in Table 8. In Table 8, the neural network and the fuzzy-neural network are trained with gradient-based algorithms, where 36 and 32 hidden nodes are employed to approximate the obtained training data; however, 30,000 training epochs are needed to achieve satisfactory performance, and the drawback of slow learning is observed. Through Figs. 6 and 7 and Table 8, the improved accuracy and speed show the merit of the proposed work. Through the results and the performance comparisons, the proposed approach shows excellent capability for nonlinear identification and prediction.
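For reference, the sketch below generates the Ikeda data of Eqs. (31)–(35) used in Example 3 (an illustration only, not the author's code).

```python
import numpy as np

def ikeda_series(n, x0=0.0, y0=0.0):
    """Ikeda mapping of Eqs. (31)-(33) with x(0) = y(0) = 0."""
    x = np.zeros(n + 1)
    y = np.zeros(n + 1)
    x[0], y[0] = x0, y0
    for N in range(n):
        u = 0.4 - 5.5 / (1.0 + x[N] ** 2 + y[N] ** 2)                  # Eq. (33)
        x[N + 1] = 0.85 + 0.9 * (x[N] * np.cos(u) - y[N] * np.sin(u))  # Eq. (31)
        y[N + 1] = 0.9 * (y[N] * np.cos(u) + x[N] * np.sin(u))         # Eq. (32)
    return x, y

# Eqs. (34)-(35): 500 pairs, first 300 for training, remaining 200 for testing.
x, y = ikeda_series(500)
X = np.stack([x[:-1], y[:-1]], axis=1)   # inputs  [x(N), y(N)]
D = np.stack([x[1:],  y[1:]],  axis=1)   # targets [x(N+1), y(N+1)]
X_train, D_train, X_test, D_test = X[:300], D[:300], X[300:], D[300:]
```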

Table 8. Performance comparison of the proposed work with other approaches for the Ikeda mapping.

Method                                                      RMSE_training   RMSE_testing
Back-propagation-based CMAC [17] (with 1000 training epochs) 0.02526        0.03049
  (with 2000 training epochs)                                 0.01830        0.02130
  (with 5000 training epochs)                                 0.01293        0.01593
  (with 10000 training epochs)                                0.01083        0.01495
Back-propagation network                                      0.00524        0.00553
Fuzzy-neuro network                                           0.00131        0.00203
Proposed work                                                 0.0006         0.0007

Fig. 7. The performance of the proposed work for Ikeda mapping: (a) identification and prediction for x-dimension, (b) identification and prediction error for x-dimension, (c) identification and prediction for y-dimension, and (d) identification and prediction error for y-dimension.


5. Discussion and conclusion

Functionally, the trained CMAC performs multivariable function approximation in a generalized look-up-table manner. With the processes of quantization and block shifting, the initial receptive field composition is generated systematically. This characteristic of the CMAC alleviates the design process for initializing the neuro-fuzzy system. In Table 2, the generalization ability of the CMAC can be observed, where the initial means and variances are given according to the centers and widths of the blocks. Thus, the proposed CMAC neuro-fuzzy system executes the input space partitioning efficiently, and with the obtained training data the proposed approach can achieve better performance after learning.

The derivative-free method is well suited to most real-world optimization problems, where the interactions in the system are often unknown. The RO–LSE algorithm, in which the antecedent and consequent parameters are evolved separately, takes the advantages of derivative-free optimization and fast convergence. The optimization of RO involves random number generation, interpolation point search, and fitness evaluation; the interpolation check increases the chance of finding a better solution. RO-based learning can continuously improve its performance without derivative information, purely based on the evaluated fitness. Thus, it is easy to understand and convenient to adapt to specific applications once the fitness function is defined. In this paper, RO and LSE are used in a hybrid way to train the neuro-fuzzy system, and SISO, MISO, and MIMO system modeling tasks are conducted. The simulations were executed on a personal computer with an Intel 2.13 GHz CPU. The CPU time for each training epoch is 0.0375 s for Example 1, 0.7093 s for Example 2, and 0.1563 s for Example 3. The BP-based algorithms possess a slight speed advantage per epoch (0.0188 s for Example 1, 0.3250 s for Example 2, and 0.1219 s for Example 3). However, as Tables 4–6 show, the RO-based algorithm approximates the obtained patterns with better accuracy and fewer learning epochs. Through the simulation results, the computational cost of the RO-based hybrid learning is moderate and acceptable for system modeling.

The proposed CMAC-based neuro-fuzzy approach has been successfully applied to the problem of system modeling. The ability of generalization is shown with the CMAC network structure. With the RO–LSE hybrid learning and the conditional update mechanism, the optimal system parameters are machine-learned to capture the essence of the I/O information while excessive computational complexity is avoided. To validate the modeling performance, the proposed work is applied to SISO nonlinear function approximation, MISO chaotic time series identification/prediction, and MIMO dynamic system mapping. Through the simulation results and the performance comparisons with other approaches, the feasibility and merit of the proposed CMAC-based neuro-fuzzy system are observed.

Fig. 8. The output of the CMAC-based neuro-fuzzy system for Ikeda mapping.

Appendix

The hybrid learning algorithm for the CMAC-based neuro-fuzzy system is given as follows:

Step 1: Set EC = 0 (EC is the learning epoch counter). Set t_iteration = the number of learning epochs. Set q^{(EC)} = 0 (the initial mean vector of the Gaussian random vectors is set to zeros). Set the values of the variance set Z. Set the initial set of premise parameters W^{(EC)}. Set Q = the number of training data pairs {(X(N), D(N)), N = 1, 2, ..., Q}.

Step 2: Generate \nu^{(EC)}, the Gaussian random vector with variances Z, and generate the candidates W_o = W^{(EC)}, W_p = W^{(EC)} + \nu^{(EC)}, and W_n = W^{(EC)} - \nu^{(EC)}. If valid candidates are obtained, go to Step 3; else go to Step 2.

Step 3: Calculate B_o, B_p, and B_n (the matrix B is defined in Eq. (20)), and

\tilde{A}_o = (B_o^T B_o)^{-1} B_o^T D,  \tilde{A}_p = (B_p^T B_p)^{-1} B_p^T D,  \tilde{A}_n = (B_n^T B_n)^{-1} B_n^T D

Find the cost values (E(\cdot) is defined in Eq. (15)):

\omega_o = E(W_o, \tilde{A}_o),  \omega_p = E(W_p, \tilde{A}_p),  \omega_n = E(W_n, \tilde{A}_n)

Step 4: (a) If \omega_p is the minimum, let W^{(EC+1)} = W_p and q^{(EC+1)} = 0.4\nu^{(EC)} + 0.2 q^{(EC)}; go to Step 6. (b) If \omega_n is the minimum, let W^{(EC+1)} = W_n and q^{(EC+1)} = 0.4\nu^{(EC)} - 0.2 q^{(EC)}; go to Step 6. (c) If \omega_o is the minimum, go to Step 5.

Step 5: Compute \phi = (\omega_n - \omega_p) / (\omega_n + \omega_p - 2\omega_o) and find the interpolation candidates

W_{ap} = W_o + \phi \nu^{(EC)},  W_{an} = W_o - \phi \nu^{(EC)}

Find B_{ap}, B_{an}, and \tilde{A}_{ap} = (B_{ap}^T B_{ap})^{-1} B_{ap}^T D, \tilde{A}_{an} = (B_{an}^T B_{an})^{-1} B_{an}^T D. Calculate the cost values \omega_{ap} = E(W_{ap}, \tilde{A}_{ap}) and \omega_{an} = E(W_{an}, \tilde{A}_{an}). If \omega_{ap} < \omega_o, then W^{(EC+1)} = W_{ap} with

q^{(EC+1)} = \phi q^{(EC)} - 0.4\nu^{(EC)} for -1 < \phi < 0,  q^{(EC+1)} = 0.2\phi q^{(EC)} + 0.4\nu^{(EC)} for 0 < \phi < 1,

and go to Step 6. Else, if \omega_{an} < \omega_o, then W^{(EC+1)} = W_{an} with

q^{(EC+1)} = \phi q^{(EC)} - 0.4\nu^{(EC)} for -1 < \phi < 0,  q^{(EC+1)} = 0.2\phi q^{(EC)} + 0.4\nu^{(EC)} for 0 < \phi < 1,

and go to Step 6.

Step 6: If EC > the maximum number of training epochs or the value of the cost function is acceptable for the application, stop; else set EC = EC + 1 and go to Step 2.
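A minimal sketch of one epoch of this RO–LSE hybrid learning is given below. The interpolation step (Step 5) and the candidate-validity check are omitted for brevity, the helper build_B is a hypothetical callable supplied by the caller, and the bias update for the reverse move is an assumption since its sign cannot be recovered from the source.

```python
import numpy as np

def lse_consequents(B, D):
    """Batch LSE of Eq. (21), solved with lstsq instead of an explicit inverse."""
    return np.linalg.lstsq(B, D, rcond=None)[0]

def ro_lse_epoch(W, q, Z, build_B, D, rng):
    """One epoch of the RO-LSE hybrid learning (Steps 2-4, sketch only).

    W       : current premise parameters (flat array)
    q       : bias (mean) vector of the Gaussian perturbation
    Z       : perturbation standard deviations
    build_B : callable mapping premise parameters to the matrix B of Eq. (20)
    D       : desired outputs over the training set, shape (Q,)
    """
    nu = q + Z * rng.standard_normal(W.shape)    # Step 2: random perturbation
    candidates = [W, W + nu, W - nu]             # Wo, Wp, Wn
    costs, consequents = [], []
    for Wc in candidates:                        # Step 3: LSE + RMSE per candidate
        B = build_B(Wc)
        A = lse_consequents(B, D)
        costs.append(np.sqrt(np.mean((D - B @ A) ** 2)))   # Eq. (15)
        consequents.append(A)
    best = int(np.argmin(costs))                 # Step 4: keep the best candidate
    if best == 1:                                # forward move accepted
        q = 0.4 * nu + 0.2 * q
    elif best == 2:                              # reverse move accepted (assumed mirror)
        q = -(0.4 * nu + 0.2 * q)
    return candidates[best], consequents[best], q, costs[best]
```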

References


[1] J.S. Albus, A new approach to manipulator control: the cerebellar model articulation controller, Trans. ASME J. Dyn. Syst. Meas. Control (1975) 220–227. [2] J.S. Albus, Data storage in the cerebellar model articulation controller, Trans. ASME J. Dyn. Syst. Meas. Control 12 (1975) 228–233. [3] P.E.M. Almeida, M.G. Simoes, Parametric CMAC networks: fundamentals and applications of a fast convergence neural structure, IEEE Trans. Ind. Appl. 39 (5) (2003) 1551–1557. [4] H. Casdagli, S. Eubank, Nonlinear Modeling and Forecasting, Addison-Wesley, Reading, MA, 1992. [5] J.-Y. Chen, P.-S. Tsai, C.-C. Wong, Adaptive design of a fuzzy cerebellar model arithmetic controller neural network, IEE Proc. Control Theory Appl. 152 (2) (2005) 133–137. [6] S. Chen, C.F.N. Cowan, P.M. Grant, Orthogonal least squares learning algorithm for radial basis function network, IEEE Trans. Neural Netw. 2 (2) (1991) 302–309. [7] C.-T. Chiang, C.-S. Lin, Integration of CMAC and radial basis function techniques, IEEE Int. Conf. Syst. Man Cybern. 4 (1995) 3263–3268. [8] K.B. Cho, B.H. Wang, Radial basis function based adaptive fuzzy systems and their applications to system identification and prediction, Fuzzy Sets and Syst. 83 (3) (1996) 325–339. [9] E.H.K. Fung, Y.K. Wong, H.F. Ho, M.P. Mignolet, Modelling and prediction of machining errors using ARMAX and NARMAX structures, Appl. Math. Model. 27 (8) (2003) 611–627. [10] Z.J. Geng, C.L. Mccullough, Missile control using fuzzy CMAC neural networks, AIAA J. Guid. Control Dyn. 20 (3) (1997). [11] P.Y. Glorennec, Neuro-fuzzy control using reinforcement learning, Int. Conf. Syst. Man Cybern. 4 (1993) 91–96. [12] D.P.W. Graham, G.M.T. D’Eleuterio, Robotic control using a modular architecture of cooperative artificial neural networks, in: Proceedings of the International Conference on Artificial Neural Network (ICANN’91), 1991, pp. 365–370. [13] A. Guille´n, J. Gonza´lez, I. Rojas, H. Pomares, L.J. Herrera, O. Valenzuela, A. Prieto, Using fuzzy logic to improve a clustering technique for function approximation, Neurocomputing 70 (16–18) (2007) 2853–2860. [14] S.J. Huang, K.R. Shih, Short-term load forecasting via ARMA model identification including non-Gaussian process considerations, IEEE Trans. Power Syst. 18 (2) (2003) 673–679. [15] C.M. Huang, C.J. Huang, M. Li Wang, A particle swarm optimization to identifying the ARMAX model for short-term load forecasting, IEEE Trans. Power Syst. 20 (2) (2005) 1126–1133. [16] K.S. Hwang, C.S. Lin, Smooth trajectory tracking of three-link robot: a selforganizing CMAC approach, IEEE Trans. Syst. Man. Cybern. Part B: Cybern. 28 (1998) 680–692. [17] C.-C. Jou, A fuzzy cerebellar model articulation controller, in: IEEE International Conference on Fuzzy Systems, March 1992, pp. 1171–1178. [18] C.F. Juang, Combination of online clustering and Q-value based GA for reinforcement fuzzy system design, IEEE Trans. Fuzzy Syst. 13 (3) (2005) 289–302. [19] C.F. Juang, C.T. Lin, An online self-constructing neural fuzzy inference network and its applications, IEEE Trans. Fuzzy Syst. 6 (1) (1998) 12–32. [20] V. Kadirkamanathan, M. Niranjan, A function estimation approach to sequential learning with neural networks, Neural Comput. 5 (1993) 954–975. [21] G. Kechriotis, E. Zervs, E.S. Manolakos, Using recurrent neural networks for adaptive communication channel equalization, IEEE Trans. Neural Netw. 15 (2) (1994) 267–278. [22] B. Kosko, Fuzzy Engineering, Prentice-Hall, Englewood Cliffs, NJ, 1997. [23] C. Li, C.Y. 
Lee, Self-organizing neuro-fuzzy system for control of unknown plants, IEEE Trans. Fuzzy Syst. 11 (1) (2003) 135–150. [24] C. Li, C.-Y. Lee, K.-H. Cheng, Pseudoerror-based self-organizing neuro-fuzzy system, IEEE Trans. Fuzzy Syst. 12 (6) (2004) 812–819.

˜ n ¼ invðBT Bn ÞBT D A n n Find the cost values below: (E(  ) is defined in Eq. (15))

oo ¼ EðWo ; A˜ o Þ op ¼ EðWP ; A˜ p Þ on ¼ EðWn ; A˜ n Þ Step 4: (a) If op is minimum, then let W(EC+1) ¼ Wp

qðECþ1Þ ¼ 0:4nðECÞ þ 0:2qðECÞ go to step 6. (b) If on is minimum, then let W(EC+1) ¼ Wn

qðECþ1Þ ¼ 0:4nðECÞ  0:2qðECÞ go to step 6. (c) If oo is minimum, go to step 5. Step 5: ðon op Þ Compute f ¼ ðon þo , find the interpolation candip 2oo Þ dates: ðECÞ

Wap ¼ Wo þ fn

ðECÞ

Wan ¼ Wo  fn

˜ ap , A ˜ an Find Bap, Ban, and A ˜ ap ¼ invðBT Bap ÞBT D A ap ap ˜ an ¼ invðBT Ban ÞBT D A an an Calculate the cost values:

oap ¼ EðWap ; A˜ ap Þ oan ¼ EðWan ; A˜ an Þ If oapooo, then WðECþ1Þ ¼ Wap

qðECþ1Þ ¼ fqðECÞ  0:4nðECÞ ;

where  1oao0

qðECþ1Þ ¼ 0:2fqðECÞ þ 0:4nðECÞ ;

where 0oao1

ARTICLE IN PRESS 1774

K.-H. Cheng / Neurocomputing 72 (2009) 1763–1774

[25] C. Li, R. Priemer, K.H. Cheng, Optimization by random search with jumps, Int. J. Numer. Methods Eng. 60 (2004) 1301–1315. [26] L. Li, C. Hou, The study of application of FCMAC neural network in the industrial process on-line identification and optimization, in: Proceedings of the International Conference on Neural Information Processing (ICONIP ‘02), vol. 4, 2002, pp. 1734–1738. [27] C.-J. Lin, H.-J. Chen, C.-Y. Lee, A self-organizing recurrent fuzzy CMAC model for dynamic system identification, IEEE Int. Conf. Fuzzy Syst. 2 (2004) 697–702. [28] C.T. Lin, C.S.G. Lee, Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems, Prentice-Hall, Englewood Cliffs, NJ, 1996. [29] F.J. Lin, C.H. Lin, P.H. Shen., Self-constructing fuzzy neural network speed controller for permanent-magnet synchronous motor drive, IEEE Trans. Fuzzy Syst. 9 (5) (2001) 751–759. [30] Y. Lu, N. Sundararajan, P. Saratchandran, A sequential learning scheme for function approximation by using minimal radial basis function networks, Neural Comput. 8 (1997) 461–478. [31] M.N. Nguyen, D. Shi, C. Quek, Self-organizing Gaussian fuzzy CMAC with truth value restriction, in: Third International Conference on Information Technology and Applications (ICITA 2005), vol. 2, 2005, pp. 185–190. [32] Y.-F. Peng, C.-M. Lin, Adaptive recurrent cerebellar model articulation controller for linear ultrasonic motor with optimal learning rates, Neurocomputing 70 (16–18) (2007) 2626–2637. [33] A.E. Ruano, P.J. Fleming, C. Teixeira, K. Rodrı´guez-Va´zquez, C.M. Fonseca, Nonlinear identification of aircraft gas-turbine dynamics, Neurocomputing 55 (3–4) (2003) 551–579. [34] Z. Shen, C. Guo, H. Li, General fuzzified CMAC-based model reference adaptive control for ship steering, in: Proceedings of the IEEE International Symposium on Intelligence Control, June 2005, pp. 1257–1262. [35] M.C. Su, H.T. Chang, Application of neural networks incorporated with realvalued genetic algorithms in knowledge acquisition, Fuzzy Sets and Syst. 112 (2000) 85–97.

[36] D.E. Thompson, S. Kwon, Neighborhood sequential and random training techniques for CMAC, IEEE Trans. Neural Netw. 6 (1) (1995) 196–202. [37] L.X. Wang, J.M. Mendel, Fuzzy basis functions, universal approximation, and orthogonal least squares learning, IEEE Trans. Neural Netw. 3 (5) (1992) 807–814. [38] C.C. Wong, C.C. Chen, A GA-based method for constructing fuzzy systems directly from numerical data, IEEE Trans. Syst. Man. Cybern. Part B: Cybern. 30 (6) (2000) 904–911. [39] S. Wu, M.J. Er, Dynamic fuzzy neural networks—a novel approach to function approximation, IEEE Trans. Syst. Man. Cybern. Part B: Cybern. 30 (2) (2000) 358–364. [40] I.-C. Yeh, Modeling chaotic two-dimensional mapping with fuzzy-neuron networks, Fuzzy Sets Syst. 105 (3) (1999) 421–427. [41] K. Zhang, F. Qian, Fuzzy CMAC and its application, in: Proceedings of the 3rd World Congress on Intelligent Control and Automation, vol. 2, June–July 2000, pp. 944–947. [42] Z. Zhu, H. Leung, Identification of linear systems driven by chaotic signals using nonlinear prediction, IEEE Trans. Circuits Syst. I 49 (2) (2002) 170–180. Kuo-Hsiang Cheng was born in Taipei, Taiwan, R.O.C., in 1978. He received B.S. degree in automatic control engineering from the Feng Chia University and the Ph.D. degree in electrical engineering from the Chang Gung University, Taiwan, in 2000 and 2006, respectively. He is now a researcher of the Mechanical and Systems Research Laboratories (MSL), Industrial Technology Research Institute (ITRI), Hsinchu, Taiwan, R.O.C. His research interests include fuzzy logic, neural networks, intelligent systems and control, and intelligent vehicles.