Online SVM regression algorithm-based adaptive inverse control

Hui Wang, Daoying Pi, Youxian Sun

The National Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, PR China

Neurocomputing 70 (2007) 952–959. Available online 15 October 2006.

Abstract

An adaptive inverse control algorithm is proposed by combining a fast online support vector machine regression (SVR) algorithm with a straight inverse control scheme. Because the training speed of the standard online SVR algorithm is very slow, a kernel cache-based method is developed to accelerate it, yielding a new fast online SVR algorithm. The new algorithm is then applied in straight inverse control to construct the inverse model of the controlled system online, and the output errors of the system are used to drive the online SVR algorithm, which makes the whole control system a closed-loop one. Simulation results show that the new algorithm has good control performance. © 2006 Elsevier B.V. All rights reserved.

Keywords: Adaptive inverse control algorithm; Kernel cache; Closed-loop system

1. Introduction

Adaptive inverse control was named and proposed by Professor Widrow in 1986 [18]. It is a novel method in control system design: the inverse model of the controlled system is used as a series controller to shape the dynamic characteristics of the system in an open-loop way, so the instability that feedback may introduce is avoided [4]. Compared with standard inverse control, adaptive inverse control can automatically track variations of the system model and suppress dynamic noise. Up to now, methods used in adaptive inverse control of linear systems have mostly been based on linear adaptive filters, and methods for nonlinear systems have mostly been based on neural networks [17].

The key problem of adaptive inverse control is how to construct the inverse model of the controlled system accurately. Because a neural network can process complex nonlinear relations in parallel, it is suitable for modeling nonlinear systems, so neural network modeling is often used to construct the inverse model in adaptive inverse control of nonlinear systems [6,10]. But neural networks have drawbacks such as slow learning speed and weak generalization ability; the training process risks falling into a local minimum, and the network structure must be designed before training and use.

The support vector machine (SVM) is a new and effective machine-learning method [13] that has been used successfully for classification, function regression, time-series prediction, etc. [2,8,11,12]. Compared with a neural network, an SVM has good generalization ability and is especially suitable for learning from small samples; its training algorithm does not fall into local minima, and it constructs the structure of the system model automatically. Currently most support vector machine regression (SVR) training algorithms are offline, but an online SVR algorithm is more useful when the system to be identified is time-variant, because it can automatically track changes of a system model with time-varying and time-lagging characteristics. Online SVR algorithms have been studied by many researchers [1,7,6,14], but they still have some drawbacks. First, these algorithms become invalid when the margin support vector set is empty; in this paper a method to deal with this special case is developed. Second, their training speed is very slow, so they are not suitable for real-time control applications. Analysis shows that most of the training time is spent computing kernel function values, so a kernel cache-based method is developed to accelerate the training process. By combining these two improvements with the regular online SVR algorithm, we obtain a new fast online SVR algorithm, which is used in adaptive inverse control to construct the inverse model of the controlled system. We show that adaptive inverse control based on the fast SVR algorithm achieves good performance when used to control nonlinear systems with time-varying and time-lagging characteristics.

The paper is organized as follows. In Section 2 the improved fast online SVR algorithm is introduced in detail. In Section 3 the reversibility of the system is considered and the algorithm is applied in adaptive inverse control to construct a new SVR algorithm-based adaptive inverse control. Two simulation results are given to show the validity of the algorithm in Section 4. In Sections 5 and 6, comparisons and conclusions are put forward.

2. Fast online SVR algorithm

2.1. Basic online SVR algorithm

The basic online SVR algorithm can be described as follows. Given a training sample set $T = \{(x_i, y_i),\ i = 1 \ldots l\}$, where $x_i \in \mathbb{R}^N$ and $y_i \in \mathbb{R}$, we construct the regression function

$$f(x) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*)\,K(x_i, x) + b. \tag{1}$$

The coefficients $\alpha_i$, $\alpha_i^*$ and $b$ can be obtained by solving the following optimization problem:

$$
\begin{aligned}
\min_{\alpha_i, \alpha_i^*, b}\ W = {} & \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l} K(x_i, x_j)(\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)
+ \varepsilon \sum_{i=1}^{l} (\alpha_i + \alpha_i^*) - \sum_{i=1}^{l} y_i (\alpha_i - \alpha_i^*) \\
\text{s.t.}\ & 0 \le \alpha_i, \alpha_i^* \le C,\quad i = 1 \ldots l, \qquad \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0,
\end{aligned}
\tag{2}
$$

where $\alpha_i$ and $\alpha_i^*$ are the Lagrange multipliers and $K(x_i, x_j)$ is the kernel function. We then define the coefficient difference $\theta_i$ and the margin function $h(x_i)$ as

$$
\theta_i = \alpha_i - \alpha_i^*, \qquad
h(x_i) = f(x_i) - y_i = \sum_{j=1}^{l} K(x_i, x_j)\,\theta_j + b - y_i.
\tag{3}
$$

According to the Lagrange multiplier method and the Karush–Kuhn–Tucker (KKT) conditions, the training sample set can be separated into three subsets [5] (as shown in Fig. 1):

$$
\begin{aligned}
\text{Set } E\ (\text{error support vectors}):\ & E = \{\, i \mid |\theta_i| = C,\ |h(x_i)| \ge \varepsilon \,\}; \\
\text{Set } S\ (\text{margin support vectors}):\ & S = \{\, i \mid 0 < |\theta_i| < C,\ |h(x_i)| = \varepsilon \,\}; \\
\text{Set } R\ (\text{remaining samples}):\ & R = \{\, i \mid \theta_i = 0,\ |h(x_i)| \le \varepsilon \,\}.
\end{aligned}
\tag{4}
$$

[Fig. 1. The three subsets obtained from the training sample set: $E$ with $\theta_i = C$, $S$, $E$ with $\theta_i = -C$, and $R$ with $\theta_i = 0$.]
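To make the set bookkeeping concrete, here is a minimal sketch (not the authors' code) of the margin function of Eq. (3) and the $E$/$S$/$R$ partition of Eq. (4). The RBF kernel and all function names are assumptions for illustration.

```python
# Sketch of Eqs. (3)-(4): margin function h(x_i) and the E/S/R partition.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Assumed kernel K(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def margin(X, y, theta, b, i, gamma=1.0):
    """h(x_i) = f(x_i) - y_i = sum_j K(x_i, x_j) * theta_j + b - y_i, Eq. (3)."""
    k = np.array([rbf_kernel(X[i], xj, gamma) for xj in X])
    return float(k @ theta + b - y[i])

def partition_sets(theta, C, tol=1e-9):
    """Eq. (4): split sample indices into E (error), S (margin), R (remaining).
    At a KKT-consistent solution, |theta_i| = C implies |h(x_i)| >= eps,
    0 < |theta_i| < C implies |h(x_i)| = eps, theta_i = 0 implies |h| <= eps."""
    E, S, R = [], [], []
    for i, th in enumerate(theta):
        if abs(abs(th) - C) < tol:
            E.append(i)
        elif abs(th) > tol:
            S.append(i)
        else:
            R.append(i)
    return E, S, R
```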

When we add a new sample $x_c$, the goal is to let $x_c$ enter one of the three sets while the KKT conditions of all other samples remain satisfied. At first we set

$$\theta_c = 0, \qquad h(x_c) = f(x_c) - y_c = \sum_{j=1}^{l} K(x_c, x_j)\,\theta_j + b - y_c. \tag{5}$$

Then we gradually change the value of $\theta_c$ while keeping all other samples satisfying the KKT conditions. The change of $\theta_c$ may change the values of $\theta_i$ and $h(x_i)$ of the other samples according to

$$\Delta h(x_i) = K(x_i, x_c)\,\Delta\theta_c + \sum_{j=1}^{l} K(x_i, x_j)\,\Delta\theta_j + \Delta b \tag{6}$$

(only samples in set $S$ have nonzero $\Delta\theta_j$). From Eq. (3) and the equality constraint of Eq. (2), the relation between $\theta_c$ and the $\theta_i$ is

$$\theta_c + \sum_{i=1}^{l} \theta_i = 0. \tag{7}$$

From these formulae we obtain the relation among $\Delta\theta_c$, the $\Delta\theta_i$ and $\Delta b$:

$$
\begin{bmatrix} \Delta b \\ \Delta\theta_{S_1} \\ \vdots \\ \Delta\theta_{S_{l_S}} \end{bmatrix} = \beta\, \Delta\theta_c,
\tag{8}
$$

where $x_{S_i} \in S$, $l_S$ is the sample number of set $S$, and

$$
\beta = \begin{bmatrix} \beta_b \\ \beta_{S_1} \\ \vdots \\ \beta_{S_{l_S}} \end{bmatrix}
= -\mathcal{R} \begin{bmatrix} 1 \\ K(x_{S_1}, x_c) \\ \vdots \\ K(x_{S_{l_S}}, x_c) \end{bmatrix},
\tag{9}
$$

where

$$
\mathcal{R} = \begin{bmatrix}
0 & 1 & \cdots & 1 \\
1 & K(x_{S_1}, x_{S_1}) & \cdots & K(x_{S_1}, x_{S_{l_S}}) \\
\vdots & \vdots & \ddots & \vdots \\
1 & K(x_{S_{l_S}}, x_{S_1}) & \cdots & K(x_{S_{l_S}}, x_{S_{l_S}})
\end{bmatrix}^{-1}.
\tag{10}
$$

We can also obtain the relation between $\Delta\theta_c$ and the $\Delta h(x_i)$:

$$
\begin{bmatrix} \Delta h(x_{n_1}) \\ \Delta h(x_{n_2}) \\ \vdots \\ \Delta h(x_{n_{l_n}}) \end{bmatrix} = \gamma\, \Delta\theta_c,
\tag{11}
$$

where $x_{n_i} \in E \cup R \cup c$, $l_n$ is the sample number of set $E \cup R \cup c$, and

$$
\gamma = \begin{bmatrix} K(x_{n_1}, x_c) \\ K(x_{n_2}, x_c) \\ \vdots \\ K(x_{n_{l_n}}, x_c) \end{bmatrix}
+ \begin{bmatrix}
1 & K(x_{n_1}, x_{S_1}) & \cdots & K(x_{n_1}, x_{S_{l_S}}) \\
1 & K(x_{n_2}, x_{S_1}) & \cdots & K(x_{n_2}, x_{S_{l_S}}) \\
\vdots & \vdots & \ddots & \vdots \\
1 & K(x_{n_{l_n}}, x_{S_1}) & \cdots & K(x_{n_{l_n}}, x_{S_{l_S}})
\end{bmatrix} \beta.
\tag{12}
$$

Matrix $\mathcal{R}$ can be updated with a recursive algorithm when a new sample $x_i$ is added to set $S$:

$$
\mathcal{R} = \begin{bmatrix} \mathcal{R} & \begin{matrix} 0 \\ \vdots \\ 0 \end{matrix} \\ \begin{matrix} 0 & \cdots & 0 \end{matrix} & 0 \end{bmatrix}
+ \frac{1}{\gamma_i} \begin{bmatrix} \beta \\ 1 \end{bmatrix} \begin{bmatrix} \beta^{\mathrm{T}} & 1 \end{bmatrix},
\tag{13}
$$

where

$$
\beta = -\mathcal{R} \begin{bmatrix} 1 \\ K(x_{S_1}, x_i) \\ \vdots \\ K(x_{S_{l_S}}, x_i) \end{bmatrix},
\tag{14}
$$

$$
\gamma_i = K(x_i, x_i) + \begin{bmatrix} 1 & K(x_{S_1}, x_i) & \cdots & K(x_{S_{l_S}}, x_i) \end{bmatrix} \beta.
\tag{15}
$$

When the $k$-th sample is removed from set $S$, matrix $\mathcal{R}$ is updated as

$$
\mathcal{R}_{i,j} = \mathcal{R}_{i,j} - \frac{\mathcal{R}_{i,k}\,\mathcal{R}_{k,j}}{\mathcal{R}_{k,k}},
\qquad i, j \in [1, \ldots, k, k+2, \ldots, l_S + 1].
\tag{16}
$$
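The recursive update of matrix $\mathcal{R}$ avoids re-inverting the kernel matrix of set $S$ at every event. A sketch of Eqs. (13)–(16) under the same notation (NumPy; the function names are illustrative, not the authors' code):

```python
# Sketch of the recursive updates of matrix R, Eqs. (13)-(16).
import numpy as np

def expand_R(R, k_col, k_ii):
    """Add a margin support vector x_i to set S.
    R     : current (l_S + 1) x (l_S + 1) matrix of Eq. (10)
    k_col : vector [1, K(x_S1, x_i), ..., K(x_Sls, x_i)]
    k_ii  : K(x_i, x_i)
    """
    beta = -R @ k_col                          # Eq. (14)
    gamma_i = k_ii + k_col @ beta              # Eq. (15)
    n = R.shape[0]
    R_new = np.zeros((n + 1, n + 1))
    R_new[:n, :n] = R                          # border the old R with zeros
    v = np.append(beta, 1.0)
    return R_new + np.outer(v, v) / gamma_i    # Eq. (13)

def shrink_R(R, k):
    """Remove the k-th margin support vector from set S, Eq. (16):
    R_ij <- R_ij - R_ik * R_kj / R_kk over the remaining indices."""
    keep = [i for i in range(R.shape[0]) if i != k]
    return R[np.ix_(keep, keep)] - np.outer(R[keep, k], R[k, keep]) / R[k, k]
```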

2.2. Method for dealing with the special case that set S is empty

The above SVR algorithm does not work when set $S$ is empty, because in this case $\theta_c$ keeps constant and we cannot update $\theta_i$ and $h(x_i)$ according to formulae (8) and (11). The following method is proposed to deal with this special case. When $S$ is empty, according to Eq. (6) we get

$$\Delta h(x_i) = \Delta b,$$

so the relation between the $\Delta h(x_i)$ and $\Delta b$ is

$$
\begin{bmatrix} \Delta h(x_{n_1}) \\ \Delta h(x_{n_2}) \\ \vdots \\ \Delta h(x_{n_{l_n}}) \end{bmatrix}
= \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \Delta b,
\tag{17}
$$

where $x_{n_i} \in E \cup R \cup c$ and $l_n$ is the sample number of set $E \cup R \cup c$. We can then gradually update $b$ until one sample $x_i$ changes from one set to another, after which the basic SVR algorithm can be used again.

2.3. Kernel cache method for accelerating the SVR algorithm

After analyzing the standard online SVR algorithm, we find that most of the training time is spent computing kernel function values; because the kernel is a nonlinear function, it evidently increases the computational load of the algorithm. Further analysis shows that these kernel values are not all updated in every step of the algorithm: the updated values are only a small part of the whole. So we can use a cache to save the kernel values. To accelerate the algorithm, we add a cache that keeps the kernel matrices $K(x_n, x_E)$, $K(x_n, x_R)$, $K(x_n, x_S)$ and $K(x_n, x_c)$:

$$
K(x_n, x_G) = \begin{bmatrix}
1 & K(x_{n_1}, x_{G_1}) & \cdots & K(x_{n_1}, x_{G_{l_G}}) \\
1 & K(x_{n_2}, x_{G_1}) & \cdots & K(x_{n_2}, x_{G_{l_G}}) \\
\vdots & \vdots & \ddots & \vdots \\
1 & K(x_{n_{l_n}}, x_{G_1}) & \cdots & K(x_{n_{l_n}}, x_{G_{l_G}})
\end{bmatrix},
\tag{18}
$$

$$
K(x_n, x_c) = \begin{bmatrix} K(x_{n_1}, x_c) \\ K(x_{n_2}, x_c) \\ \vdots \\ K(x_{n_{l_n}}, x_c) \end{bmatrix},
\tag{19}
$$

where $G$ is one of the sets $E$, $S$ and $R$, $x_{G_i} \in G$, $i = 1, 2, \ldots, l_G$, $l_G$ is the sample number of set $G$, $x_{n_i} \in E \cup R \cup S \cup c$, $i = 1, 2, \ldots, l_n$, and $l_n$ is the sample number of set $E \cup R \cup S \cup c$. Then $\beta$ and $\gamma$ can be expressed as

$$
\beta = -\mathcal{R}\,\big(K(x_c, x_S)\big)^{\mathrm{T}}, \qquad
\gamma = K(x_{nerc}, x_c) + K(x_{nerc}, x_S)\,\beta,
\tag{20}
$$

where $x_{nerc_i} \in E \cup R \cup c$, $i = 1, 2, \ldots, l_{nerc}$, and $l_{nerc}$ is the sample number of set $E \cup R \cup c$; $K(x_c, x_S)$ can be taken from $K(x_n, x_S)$, $K(x_{nerc}, x_c)$ from $K(x_n, x_c)$, and $K(x_{nerc}, x_S)$ from $K(x_n, x_S)$. The following rules are used to process the data in the kernel cache:

(1) Whenever a new sample $x_c$ is added, we compute $K(x_n, x_c)$ according to Eq. (19) and update $K(x_n, x_E)$, $K(x_n, x_R)$ and $K(x_n, x_S)$ as

$$
K(x_n, x_G) = \begin{bmatrix} K(x_n, x_G) \\ 1 \ \ K(x_c, x_{G_1}) \ \cdots \ K(x_c, x_{G_{l_G}}) \end{bmatrix},
\tag{21}
$$

where $G$ is one of the sets $E$, $S$ and $R$, and $l_G$ is the sample number of set $G$.

(2) When $x_c$ enters set $G$, then

$$
K(x_n, x_G) = \begin{bmatrix} K(x_n, x_G) & K(x_n, x_c) \end{bmatrix},
\tag{22}
$$

where $G$ is one of the sets $E$, $S$ and $R$.

(3) When $x_c$ is removed from the training set, $K(x_n, x_E)$, $K(x_n, x_R)$ and $K(x_n, x_S)$ are updated by deleting the row of $x_c$:

$$
K(x_n, x_G) = \begin{bmatrix}
1 & K(x_{n_1}, x_{G_1}) & \cdots & K(x_{n_1}, x_{G_{l_G}}) \\
\vdots & \vdots & \ddots & \vdots \\
1 & K(x_{c-1}, x_{G_1}) & \cdots & K(x_{c-1}, x_{G_{l_G}}) \\
1 & K(x_{c+1}, x_{G_1}) & \cdots & K(x_{c+1}, x_{G_{l_G}}) \\
\vdots & \vdots & \ddots & \vdots \\
1 & K(x_{n_{l_n}}, x_{G_1}) & \cdots & K(x_{n_{l_n}}, x_{G_{l_G}})
\end{bmatrix},
\tag{23}
$$

where $G$ is one of the sets $E$, $S$ and $R$, and $l_G$ is the sample number of set $G$.

(4) When a sample $x_{G1_i}$ moves from set $G_1$ to set $G_2$, the column $K(x_n, x_{G1_i})$ is removed from $K(x_n, x_{G_1})$ and added to $K(x_n, x_{G_2})$. Four possible cases need to be treated: case 1, $G_1 = R$, $G_2 = S$; case 2, $G_1 = E$, $G_2 = S$; case 3, $G_1 = S$, $G_2 = R$; case 4, $G_1 = S$, $G_2 = E$. In each case $K(x_n, x_{G_1})$, $K(x_n, x_{G_2})$ and $K(x_n, x_{G1_i})$ are changed according to the following formulae, respectively:

$$
K(x_n, x_{G_1}) = \begin{bmatrix}
1 & 1 & \cdots & 1 \\
K(x_{n_1}, x_{G1_1}) & K(x_{n_2}, x_{G1_1}) & \cdots & K(x_{n_{l_n}}, x_{G1_1}) \\
\vdots & \vdots & & \vdots \\
K(x_{n_1}, x_{G1_{i-1}}) & K(x_{n_2}, x_{G1_{i-1}}) & \cdots & K(x_{n_{l_n}}, x_{G1_{i-1}}) \\
K(x_{n_1}, x_{G1_{i+1}}) & K(x_{n_2}, x_{G1_{i+1}}) & \cdots & K(x_{n_{l_n}}, x_{G1_{i+1}}) \\
\vdots & \vdots & & \vdots \\
K(x_{n_1}, x_{G1_{l_{G1}}}) & K(x_{n_2}, x_{G1_{l_{G1}}}) & \cdots & K(x_{n_{l_n}}, x_{G1_{l_{G1}}})
\end{bmatrix}^{\mathrm{T}},
$$

$$
K(x_n, x_{G_2}) = \begin{bmatrix} K(x_n, x_{G_2}) & K(x_n, x_{G1_i}) \end{bmatrix},
\qquad
K(x_n, x_{G1_i}) = \begin{bmatrix} K(x_{n_1}, x_{G1_i}) \\ K(x_{n_2}, x_{G1_i}) \\ \vdots \\ K(x_{n_{l_n}}, x_{G1_i}) \end{bmatrix}.
\tag{24}
$$

Experiment results show that the running time of the SVR algorithm with the kernel cache decreases from dozens of minutes to dozens of seconds.

2.4. Fast online SVR algorithm with kernel cache

The online SVM regression training algorithm consists of two sub-algorithms: an incremental algorithm and a decremental algorithm. The main idea of the incremental algorithm is that when a sample $x_c$ is added to the training sample set, we gradually change its $\theta_c$ and $h(x_c)$ until $x_c$ enters one of the three sets; during this process some other samples also move among the sets $S$, $R$ and $E$ because they are influenced by the change of $x_c$. The goal of the incremental algorithm is to keep all samples satisfying the KKT conditions while a new sample is added to the training sample set. The detailed procedure of the incremental algorithm is as follows:

1. Compute $K(x_n, x_E)$, $K(x_n, x_R)$, $K(x_n, x_S)$, $K(x_n, x_c)$ according to Eqs. (19) and (21); set $\theta_c = 0$.
2. If $|h(x_c)| \le \varepsilon$, assign $x_c$ to set $R$, update $K(x_n, x_R)$ according to Eq. (22), and terminate.
3. Increase or decrease $\theta_c$ according to the sign of $h(x_c)$; update $b$ and $\theta_i$, $i \in S$, according to Eq. (8), and update $h(x_i)$, $i \in E \cup R \cup c$, according to Eq. (11), until $x_c$ enters set $S$ or $E$; if set $S$ is empty, update $b$ according to Eq. (17) until some sample enters set $S$:
   - If $|h(x_c)|$ changes from $|h(x_c)| > \varepsilon$ to $|h(x_c)| = \varepsilon$, add $x_c$ to set $S$, update matrix $\mathcal{R}$, compute $K(x_n, x_S)$ according to Eq. (22), and terminate.
   - If $|\theta_c|$ increases from $|\theta_c| < C$ to $|\theta_c| = C$, add $x_c$ to set $E$, compute $K(x_n, x_E)$ according to Eq. (22), and terminate.
   - For each sample $x_i$ in set $S$: if $\theta_i$ changes from $0 < |\theta_i| < C$ to $|\theta_i| = C$, move $x_i$ from set $S$ to set $E$, compute $K(x_n, x_S)$, $K(x_n, x_E)$ according to Eq. (24), and update matrix $\mathcal{R}$; if $\theta_i$ changes from $0 < |\theta_i| < C$ to $\theta_i = 0$, move $x_i$ from set $S$ to set $R$, compute $K(x_n, x_S)$, $K(x_n, x_R)$ according to Eq. (24), and update matrix $\mathcal{R}$.
   - For each sample $x_i$ in set $E$: if $h(x_i)$ changes from $|h(x_i)| > \varepsilon$ to $|h(x_i)| = \varepsilon$, move $x_i$ from set $E$ to set $S$, compute $K(x_n, x_S)$, $K(x_n, x_E)$ according to Eq. (24), and update matrix $\mathcal{R}$.
   - For each sample $x_i$ in set $R$: if $h(x_i)$ changes from $|h(x_i)| < \varepsilon$ to $|h(x_i)| = \varepsilon$, move $x_i$ from set $R$ to set $S$, compute


$K(x_n, x_S)$, $K(x_n, x_R)$ according to Eq. (24), and update matrix $\mathcal{R}$.
4. Repeat step 3.

The main idea of the decremental algorithm is that a sample with $\theta_c = 0$ can be removed safely, since it has no influence on the SVM model; when a sample $x_c$ is to be removed from the training sample set, we gradually change its $\theta_c$ and $h(x_c)$ until $\theta_c$ reaches zero, and during this process some other samples also move among the sets $S$, $R$ and $E$ because they are influenced by the change of $x_c$. The goal of the decremental algorithm is to keep all samples satisfying the KKT conditions while a useless sample is removed from the training sample set. The detailed procedure of the decremental algorithm is as follows:

1. If $x_c \in R$, remove $x_c$ from the training sample set, update $K(x_n, x_E)$, $K(x_n, x_R)$, $K(x_n, x_S)$ according to Eqs. (23) and (24), and terminate.
2. If $x_c \in E$, remove it from set $E$ and update $K(x_n, x_E)$ according to Eq. (24).
3. If $x_c \in S$, remove it from set $S$, update $K(x_n, x_S)$ according to Eq. (24), and update matrix $\mathcal{R}$.
4. Increase or decrease $\theta_c$ according to the sign of $h(x_c)$; update $b$ and $\theta_i$, $i \in S$, according to Eq. (8), and update $h(x_i)$, $i \in E \cup R \cup c$, according to Eq. (11), until $x_c$ enters set $R$; if set $S$ is empty, update $b$ according to Eq. (17) until some sample enters set $S$:
   - If $\theta_c = 0$, remove $x_c$ from the training sample set, update $K(x_n, x_S)$, $K(x_n, x_R)$ and $K(x_n, x_E)$ according to Eqs. (23) and (24), and terminate.
   - For each sample $x_i$ in set $S$: if $\theta_i$ changes from $0 < |\theta_i| < C$ to $|\theta_i| = C$, move $x_i$ from set $S$ to set $E$, compute $K(x_n, x_S)$ and $K(x_n, x_E)$ according to Eq. (24), and update matrix $\mathcal{R}$; if $\theta_i$ changes from $0 < |\theta_i| < C$ to $\theta_i = 0$, move $x_i$ from set $S$ to set $R$, compute $K(x_n, x_S)$, $K(x_n, x_R)$ according to Eq. (24), and update matrix $\mathcal{R}$.
   - For each sample $x_i$ in set $E$: if $h(x_i)$ changes from $|h(x_i)| > \varepsilon$ to $|h(x_i)| = \varepsilon$, move $x_i$ from set $E$ to set $S$, compute $K(x_n, x_S)$ and $K(x_n, x_E)$ according to Eq. (24), and update matrix $\mathcal{R}$.
   - For each sample $x_i$ in set $R$: if $h(x_i)$ changes from $|h(x_i)| < \varepsilon$ to $|h(x_i)| = \varepsilon$, move $x_i$ from set $R$ to set $S$, compute $K(x_n, x_S)$, $K(x_n, x_R)$ according to Eq. (24), and update matrix $\mathcal{R}$.
5. Repeat step 4.
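The heart of step 3 above (and step 4 of the decremental algorithm) is finding how far $\theta_c$ can move before the first sample changes sets. The following sketch shows that bookkeeping under the sensitivities of Eqs. (8) and (11); the event enumeration and interface are our illustrative reading of the procedure, not code from the paper.

```python
# Sketch: largest safe step of theta_c before some sample changes sets.
import numpy as np

def max_step(theta_c, h_c, C, eps, theta_S, beta_S, h_n, gamma_n):
    """theta_S, beta_S : coefficients of set S and sensitivities from Eq. (8);
    h_n, gamma_n       : margins of E, R and x_c and sensitivities, Eq. (11)."""
    d = -np.sign(h_c)                    # move theta_c against the error
    candidates = [C - d * theta_c]       # theta_c reaching its box bound d*C
    for th, b in zip(theta_S, beta_S):   # margin vectors reaching 0 or +/-C
        rate = b * d
        if abs(rate) > 1e-12:
            for target in (0.0, C, -C):
                s = (target - th) / rate
                if s > 1e-12:
                    candidates.append(s)
    for h, g in zip(h_n, gamma_n):       # margins h(x_i) reaching +/-eps
        rate = g * d
        if abs(rate) > 1e-12:
            for target in (eps, -eps):
                s = (target - h) / rate
                if s > 1e-12:
                    candidates.append(s)
    return d * min(candidates)           # smallest positive step fires first
```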

2.5. Initialization of the algorithm

An efficient starting point is the two-sample solution. Given a training set $T = \{(x_1, y_1), (x_2, y_2)\}$ with $y_1 \ge y_2$, the solution of Eq. (2) is

$$
\theta_1 = \max\!\left(0,\ \min\!\left(C,\ \frac{y_1 - y_2 - 2\varepsilon}{2\,\big(K(x_1, x_1) - K(x_1, x_2)\big)}\right)\right), \qquad
\theta_2 = -\theta_1, \qquad
b = \frac{y_1 + y_2}{2}.
\tag{25}
$$

The sets $E$, $S$ and $R$ are initialized from these two points based on Eq. (4). If set $S$ is nonempty, matrix $\mathcal{R}$ can be initialized from Eq. (10); as long as set $S$ is empty, matrix $\mathcal{R}$ is not used.
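A minimal sketch of the two-sample initialization of Eq. (25); the kernel argument is a placeholder, and $y_1 \ge y_2$ and $K(x_1, x_1) \ne K(x_1, x_2)$ are assumed.

```python
# Sketch of Eq. (25): closed-form solution of Eq. (2) for two samples.
def init_two_samples(x1, y1, x2, y2, C, eps, kernel):
    """Return (theta1, theta2, b); assumes y1 >= y2."""
    denom = 2.0 * (kernel(x1, x1) - kernel(x1, x2))   # assumed nonzero
    theta1 = max(0.0, min(C, (y1 - y2 - 2.0 * eps) / denom))
    theta2 = -theta1                                  # equality constraint
    b = (y1 + y2) / 2.0
    return theta1, theta2, b
```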

3. Fast online SVR algorithm-based adaptive inverse control

The system structure of adaptive inverse control is shown in Fig. 2. The controller is connected in series with the controlled system, and feedback is used to drive the online SVR algorithm so as to improve the control performance and track system variations. In Fig. 2, $y_{sp}$ is the reference input, $u$ is the controller output, $y$ is the system output, $C$ is the controller, and $P$ is the controlled plant.

[Fig. 2. The structure of the adaptive inverse model: (a) the identification stage ("identifying model"), in which the SVR model learns from $u$, $y$ and the error $e$; (b) the control stage ("control model"), in which the controller $C$ drives the plant $P$ so that $y$ tracks $y_{sp}$.]

According to the system model, its inverse model can be defined as

$$u(k) = f\big[y(k+1), y(k), \ldots, y(k-n), u(k-1), \ldots, u(k-m)\big].$$

The training set of the SVM can be constructed as follows:

$$
\begin{aligned}
D &= \{X_i, Y_i\}, \quad i = 1, 2, \ldots, l, \\
Y_i &= u(k), \\
X_i &= \big[y(k+1), y(k), \ldots, y(k-n), u(k-1), \ldots, u(k-m)\big].
\end{aligned}
\tag{26}
$$

The control algorithm can be separated into two sub-stages: an online identification stage and a control stage.

1. Construct a new SVM training set according to Eq. (26).
2. Use the incremental algorithm of the online SVR algorithm to train the system model.
3. If the number of training samples exceeds the threshold max_sp, use the decremental algorithm to remove redundant samples.
4. Compute $e = y_{sp} - y$; if $|e|$ is less than a threshold min_error, end the identification process and go to step 5, else return to step 1.


5. Get $u$ by using the SVR-based inverse model to control the output $y$ according to the reference input $y_{sp}$.
6. Return to step 4.
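The two-stage procedure can be summarized by the following schematic loop (a sketch, not the authors' code). The `svr` object with `increment`, `decrement_oldest`, `n_samples` and `predict` methods and the `plant` function are hypothetical stand-ins for the fast online SVR algorithm of Section 2 and the controlled system.

```python
# Schematic of steps 1-6 of Section 3 for a plant of orders n and m.
import numpy as np

def adaptive_inverse_control(plant, svr, ysp, n, m,
                             max_sp=200, min_error=1e-2, steps=200):
    y_hist = [0.0] * (n + 2)                   # y(k+1), y(k), ..., y(k-n)
    u_hist = [0.0] * (m + 1)                   # u(k), u(k-1), ..., u(k-m)
    identifying = True
    for _ in range(steps):
        if identifying:                        # steps 1-3: online identification
            X = np.array(y_hist + u_hist[1:])  # regressor of Eq. (26)
            svr.increment(X, u_hist[0])        # label Y_i = u(k)
            if svr.n_samples() > max_sp:       # prune redundant samples
                svr.decrement_oldest()
        # steps 5-6: the inverse model computes u for the reference y_sp
        X_ctrl = np.array([ysp] + y_hist[:n + 1] + u_hist[:m])
        u = float(svr.predict(X_ctrl))
        y = plant(u)                           # apply u, observe the output
        identifying = abs(ysp - y) >= min_error   # step 4 switching criterion
        u_hist = [u] + u_hist[:-1]
        y_hist = [y] + y_hist[:-1]
    return y_hist[0]
```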

4. Simulations

4.1. Simulation on a linear system

The plant to be identified and controlled is described by the following difference equation [6]:

$$y(k+1) = 0.7\,y(k) + 1.25\,u(k) + 0.75\,u(k-1). \tag{27}$$

In this model, the plant output at time $k+1$ depends both on its past value $y(k)$ and on past input values $u(k-i)$ ($i = 0, 1$). The inverse model of the plant can be selected as

$$u(k) = f\big(y(k+1), y(k), u(k-1)\big).$$

Assuming that the system structure is unknown, training samples can be constructed according to Eq. (26) as

$$Y_i = u(k), \qquad X_i = \big[y(k+1), y(k), y(k-1), u(k-1), u(k-2)\big].$$

The SVR algorithm parameters are set as $C = 5$ and $\varepsilon = 0.001$, and the RBF kernel function is used. The training process is shown in Fig. 3(a), where the solid line is the control input $u$ and the broken line is the output of the SVM model. The control process is shown in Fig. 3(b), where the solid line is the reference input $y_{sp}$ and the broken line is the system output $y$. It can be seen from Fig. 3 that the SVM-based inverse model corresponds to the real plant well, and the plant output quickly tracks the reference input with little overshoot. The training process of 200 samples (shown in Fig. 3(a)) takes 28.563 s.

[Fig. 3. Training process (a) and control process (b) of the linear plant.]

Define $e_1$ as the error between the predicted output of the SVM model and the system output (at the identification stage) and $e_2$ as the error between the reference input and the system output (at the control stage); then we get

$$
e_1 = \sqrt{\frac{1}{200}\sum_{i=1}^{200} \big(y_r(i) - y(i)\big)^2} = 0.0019, \qquad
e_2 = \sqrt{\frac{1}{200}\sum_{i=1}^{200} \big(y_m(i) - y(i)\big)^2} = 0.0252.
\tag{28}
$$
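For illustration, the inverse-model training pairs for plant (27) could be generated as below; the uniform excitation signal and the 200-step horizon are assumptions consistent with the figures, not details given in the text.

```python
# Sketch: simulate the linear plant of Eq. (27) and build the samples of
# Eq. (26) with X_i = [y(k+1), y(k), y(k-1), u(k-1), u(k-2)], Y_i = u(k).
import numpy as np

rng = np.random.default_rng(0)
N = 200
u = rng.uniform(0.0, 0.5, N)                  # assumed excitation signal
y = np.zeros(N + 1)
for k in range(1, N):
    y[k + 1] = 0.7 * y[k] + 1.25 * u[k] + 0.75 * u[k - 1]   # Eq. (27)

X = np.array([[y[k + 1], y[k], y[k - 1], u[k - 1], u[k - 2]]
              for k in range(2, N)])
Y = np.array([u[k] for k in range(2, N)])
```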

4.2. Simulation on a nonlinear system

The plant to be identified and controlled is described by the following difference equation [9]:

$$y(k+1) = \frac{6\,y(k)}{1 + y(k)^2} + u(k) + 0.3\,u(k-1). \tag{29}$$

In this model, the plant output at time $k+1$ depends both on its past value $y(k)$ and on past input values $u(k-i)$ ($i = 0, 1$). The nonlinear dependence of $y(k+1)$ on $y(k)$ and $u(k-i)$ ($i = 0, 1$) is assumed to be separable. The inverse model can be selected as

$$u(k) = f\big(y(k+1), y(k), u(k-1)\big).$$

Assuming that the system structure is unknown, training samples can be constructed according to Eq. (26) as

$$Y = u(k), \qquad X = \big[y(k+1), y(k), y(k-1), u(k-1), u(k-2)\big].$$

The parameters and kernel function are the same as in the first experiment. The training process is shown in Fig. 4(a), where the solid line is the control input $u$ and the broken line is the output of the SVM model. The control process is shown in Fig. 4(b), where the solid line is the reference input $y_{sp}$ and the broken line is the system output $y$. It can be seen from Fig. 4 that the SVM-based inverse nonlinear model corresponds to the real nonlinear plant well, and the nonlinear plant output quickly tracks the reference input with little overshoot.


[Fig. 4. Training process (a) and control process (b) of the nonlinear model.]

The training process of 200 samples (shown in Fig. 4(a)) takes 27.344 s, and the errors $e_1$ and $e_2$ in this case are

$$e_1 = 0.0018, \qquad e_2 = 0.0133.$$

From these error and training-time data, we can see that the online SVR algorithm approximates the system model quickly and with very high precision, and that the system output follows the reference input quickly. The online SVR algorithm-based adaptive inverse control has very good control performance.

5. Comparison

The newly proposed adaptive inverse control is easy to use and has good real-time processing ability and adaptivity, which come from the adoption of the online SVR algorithm. Here we compare online SVR algorithm-based system identification with offline SVR algorithm-based and neural network (NN)-based system identification.

5.1. Usability

Online and offline SVR algorithms are both easy to use: they have a uniform model structure and only a few adjustable parameters. In contrast, the structure of an NN must be selected, including the number of layers, the number of inner nodes in each layer, and the initial values of the weight coefficients. If bad parameters are selected, the result is an NN model with poor performance or a very long training time. So SVR algorithms are easier to use for constructing system models than NNs.

5.2. Speed

Online SVR algorithms are faster than both the offline algorithm and NN training. In MATLAB simulation experiments, the training processes of the online SVR algorithm shown in Figs. 3(a) and 4(a) take less than 0.5 min. Using the SVM MATLAB Toolbox (written by Steve R. Gunn, downloaded from http://www.kernel-machine.org/) [3] to simulate the offline SVR algorithm, processing 200 training samples takes about 1 min (the kernel algorithm of the toolbox is written in C code, while our algorithm is written in MATLAB code). A simulation experiment was also done with a neural network to identify the nonlinear system (29): the MATLAB NN toolbox [16] was used to construct a three-layer BP network based on logsig and purelin transfer functions and the Levenberg–Marquardt algorithm. The training of the BP network takes half an hour if good initial parameters are selected, and even more time if bad initial parameters are selected [19].

5.3. Adaptivity

The online SVR algorithm can automatically track variations of the identified plant model, so it can be used to construct the inverse model of a time-varying system. The offline SVR algorithm cannot be used to model a time-varying system. Currently NNs have only a few online training algorithms, and their performance is not very good.

6. Conclusion

A fast online SVR algorithm-based adaptive inverse control is developed in this paper. We show that it is easy to use; with the kernel cache-based method it has good real-time ability; and it can control time-varying systems with good performance. Future work is to analyze the control performance quantitatively and to reduce the memory requirement of the algorithm.


Acknowledgments

This work is sponsored by the 973 program of China under Grant no. 2002CB312200 and the National Natural Science Foundation of China under Grant nos. 60574019 and 60474045.

References

[1] G. Cauwenberghs, et al., Incremental and decremental support vector machine learning, in: Fourteenth Conference on Advances in Neural Information Processing Systems, NIPS, 2001, pp. 409–423.
[2] P.M.L. Drezet, R.F. Harrison, Support vector machines for system identification, in: UKACC International Conference on CONTROL'98, UK, 1998, pp. 668–692.
[3] S.R. Gunn, Support vector machines for classification and regression, Technical Report, Image Speech and Intelligent Systems Research Group, University of Southampton, 1997.
[4] X. Liu, D.J. Zhang, J. Wu, Survey of adaptive inverse control, Electr. Autom. 25 (6) (2003) 5–8.
[5] J.S. Ma, T. James, P. Simon, Accurate on-line support vector regression, Neural Comput. 15 (11) (2003) 2683–2704.
[6] X.M. Ma, Inverse identification and closed-loop control of dynamic systems using neural networks, Control Theory Appl. 14 (6) (1997) 829–836.
[7] M. Martin, On-line support vector machines for function approximation, Technical Report LSI-02-11-R, Software Department, Universitat Politecnica de Catalunya, Spain, 2002.
[8] K.R. Müller, A.J. Smola, Predicting time series with support vector machines, in: Proceedings of ICANN'97, Lecture Notes in Computer Science, vol. 1327, Springer, Berlin, 1997, pp. 999–1004.
[9] K.S. Narendra, Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networks 1 (1) (1990) 4–27.
[10] S.N. Singh, W. Yirn, W.R. Wells, Direct adaptive and neural control of wing-rock motion of slender delta wings, J. Guidance Control Dynam. 18 (1) (1995) 25–30.
[11] A.J. Smola, B. Schölkopf, A tutorial on support vector regression, NeuroCOLT Technical Report, Royal Holloway College, University of London, 1998.
[12] J.A.K. Suykens, Nonlinear modeling and support vector machines, in: IEEE Instrumentation and Measurement Technology Conference, Budapest, Hungary, 2001, pp. 287–294.
[13] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, USA, 1995.
[14] D.C. Wang, et al., Support vector machines regression on-line modelling and its application, Control Decision 18 (1) (2003) 89–91, 95.
[16] X. Wen, L. Zhou, D.L. Wang, X.Y. Xiong, Application and Designation of MATLAB Neural Network, Science Press, Beijing, 2000.
[17] B. Widrow, G.L. Plett, Adaptive inverse control, in: Proceedings of the 1993 International Symposium on Intelligent Control, Chicago, 1993, pp. 1–6.
[18] B. Widrow, E. Walach, Adaptive inverse control, Control Eng. Pract. 5 (1) (1997) 146–147.
[19] W.M. Zhong, D.Y. Pi, Y.X. Sun, SVM based direct inverse-model identification, Control Theory Appl. 22 (2) (2005) 307–310.

Hui Wang received his B.S. and M.S. degrees in control science from Hefei University of Technology, China, in 2000 and 2003, respectively. His current research interests include machine learning, and modelling and control of complex industrial processes.