Adaptive feedback linearization control of chaotic systems via recurrent high-order neural networks




Information Sciences 176 (2006) 2337–2354 www.elsevier.com/locate/ins

Zhao Lu a, Leang-San Shieh a,*, Guanrong Chen b, Norman P. Coleman c

a Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77204-4005, USA
b Department of Electronic Engineering, City University of Hong Kong, Kowloon, Hong Kong, PR China
c US Army Armament Center, Dover, NJ 07801, USA

Received 16 April 2004; received in revised form 8 July 2005; accepted 9 August 2005

Abstract

In the realm of nonlinear control, feedback linearization via differential geometric techniques has been a concept of paramount importance. However, the applicability of this approach is quite limited, in the sense that a detailed knowledge of the system nonlinearities is required. In practice, most physical chaotic systems have inherent unknown nonlinearities, making real-time control of such chaotic systems still a very challenging area of research. In this paper, we propose using the recurrent high-order neural network for both identifying and controlling unknown chaotic systems, in which the feedback linearization technique is used in an adaptive manner. The global uniform boundedness of parameter estimation errors and the asymptotic stability of tracking errors are proved by the Lyapunov stability theory and the LaSalle–Yoshizawa theorem. In a systematic way, this method enables stabilization of chaotic motion to either

Corresponding author. Tel.: +1 713 743 4439; fax: +1 713 743 4444. E-mail address: [email protected] (L.-S. Shieh).

0020-0255/$ - see front matter © 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.ins.2005.08.002


a steady state or a desired trajectory. The effectiveness of the proposed adaptive control method is illustrated with computer simulations of a complex chaotic system. © 2005 Elsevier Inc. All rights reserved.

Keywords: Adaptive control; Chaotic systems; Feedback linearization; Lyapunov function

1. Introduction

In many scientific and engineering fields today, more and more chaotic phenomena are being discovered and studied. Chaos is found to be useful, with great potential, in many disciplines such as celestial mechanics, high-performance circuits, secure telecommunication, systems biology, and so on [1]. On the other hand, chaos is unwanted when it causes undesired irregularity and instability in nonlinear dynamical systems. In order to improve system performance and avoid undesirable effects such as fatigue in practical mechanical systems, it is important to eliminate chaos from such systems. This need to understand and address chaos in nonlinear dynamical systems has resulted in increased research interest in this area.

As chaos is nonlinear in nature, with most physical chaotic systems inherently containing unknown nonlinearities, it is imperative to describe such systems by means of nonlinear mathematical models. However, such nonlinear models are inconvenient for control purposes for both theoretical and computational reasons. Hence, these models often have to be linearized by using appropriate exact or approximate linearization techniques [2–4]. Being the most theoretically rigorous method, feedback linearization [5] consists of finding a feedback control law and a state variable transformation (diffeomorphism) such that the closed-loop system model becomes linear in the new coordinate system. The applicability of feedback linearization, however, is somewhat limited due to the requirement of detailed knowledge of the system model, along with stringent constraints that must be satisfied by the original nonlinear system in order to synthesize the nonlinear controller. In our study, to facilitate the use of feedback linearization without a priori knowledge of the system nonlinearities, a recurrent high-order neural network (RHONN) is used to model the unknown chaotic system.
This approach allows the feedback linearization technique to be used in an adaptive way.

Inspired by biological neural systems, artificial neural networks (ANN) have exhibited superb learning, adaptation, classification and function-approximation properties, making their application to online system identification and closed-loop control promising. ANN can be classified into feedforward neural networks (FNN) and recurrent neural networks (RNN). In FNN, the processing elements are connected in such a way that all signals flow in one direction, from input units to output units. In RNN, however, there are both feedforward and feedback connections, along which signals can propagate in opposite directions. While FNN have been applied to system identification with success, there are a number of disadvantages associated with using such networks. These drawbacks include: a large number of units in the input layer (thus, high susceptibility to external noise and slow learning), stringent requirements on input signals (which must arrive at exactly the correct rate and must be controlled by a clock), and difficulty in obtaining an independent system simulator [6]. Due to their structure, RNN do not suffer from the aforementioned problems, and an RNN of smaller size may be equivalent to a rather complicated FNN architecture.

Several training methods for RNN have been developed and are available in the literature. Most of them rely on gradient-based learning and involve the computation of partial derivatives or sensitivity functions. In this respect, they are extensions of the back-propagation algorithm for FNN. Examples of such learning algorithms include recurrent back-propagation, the back-propagation-through-time algorithm, the real-time recurrent learning algorithm, and the dynamic back-propagation algorithm. Although the aforementioned training methods have been successfully used in many empirical studies, they share some fundamental drawbacks. One drawback is that, in general, they rely on some type of approximation for computing the partial derivatives. Furthermore, these training methods generally require a great deal of computational time. A third disadvantage is the inability to obtain analytical results concerning the convergence and stability of these schemes. In an attempt to overcome the stability and convergence problems, RHONN were proposed [7], in which the iterative training procedure is avoided in favor of a provably stable adaptation technique, the Lyapunov synthesis approach.
This approach was previously developed in studies of robust adaptive control and is utilized here for tuning the neural network model parameters. These developments have enhanced the understanding of neural online system identification and control in the context of closed-loop dynamical systems by providing a link to classical adaptive control theory. To date, RHONN have been successfully applied to online aircraft parameter estimation [8] and DC-motor speed control [9].

During the last decade, various techniques and approaches have been proposed for controlling chaos under different conditions and requirements, including, for instance, differential geometric control [3,4], sliding mode control [10,11], backstepping [12], linear state-feedback control [2], and so on. More recently, some adaptive control methods have also been developed for classes of chaotic systems with uncertain parameters, based on the Lyapunov stability theory [13–16]. The main advantage of the existing methods is that the controller is directly constructed in terms of analytic formulas without knowing in advance the values of the unknown parameters. However, in these approaches, it is essential to assume that the system nonlinearities are known a priori, and most of these research works concentrate on controlling single-input chaotic systems with unknown parameters entering the nonlinear dynamical equation linearly. In contrast, this paper employs RHONN for both identification and control of multi-input chaotic systems with unknown dynamics, in which the feedback linearization technique is used in an adaptive manner.

The rest of this paper is organized as follows. In the next section, the RHONN and their approximation properties are introduced. Section 3 presents the approach for identifying chaotic systems via RHONN. In Section 4, the RHONN-based adaptive feedback linearization control scheme for unknown chaotic systems is discussed. The simulation study confirming the validity of the proposed approach is reported in Section 5, with concluding remarks in Section 6. Throughout this paper, ‖·‖ and ‖·‖F denote the Euclidean vector norm and the Frobenius matrix norm, respectively.

2. RHONN and their approximation property

RNN models are characterized by a two-way connectivity between neurons. This distinguishes them from FNN, where the output of one unit is connected only to neurons in the next layer. In the simplest case, the state history of each neuron is determined by a differential equation of the form

$$\dot{v}_i = -a_i v_i + \sum_j w_{ij} y_j, \qquad (1)$$

where vi is the state of the ith neuron, ai is a constant, wij is the synaptic weight connecting the jth input to the ith neuron, and yj is the jth input to the above neuron. Here, each yj is either an external input or the state of a neuron passed through a sigmoidal function S(·), i.e., yj = S(vj). The dynamic behaviors and stability properties of the neural network model (1) have been extensively studied by Hopfield [17] and many others, e.g. [18,19]. These studies exhibited encouraging results in application areas such as associative memories, but they also revealed limitations inherent in such a simple model. In a recurrent second-order neural network, the total input to the neuron is not only a linear combination of the components yj, but also of their products yjyk. One can pursue this line further to include higher-order interactions represented by triplets yjykyl, quadruplets, etc., forming the RHONN.

Consider a RHONN consisting of n neurons and m inputs. The state of each neuron is governed by a differential equation of the form

$$\dot{v}_i = -a_i v_i + \sum_{k=1}^{L} w_{ik} \left[ \prod_{j \in I_k} y_j^{d_{jk}} \right], \qquad (2)$$

where {I1, I2, . . . , IL} is a collection of L nonordered subsets of {1, 2, . . . , m + n}, ai is a real coefficient, wik are the (adjustable) synaptic weights of the neural network, and djk are nonnegative integers. The order of the network is determined by $\max_k \sum_{j \in I_k} d_{jk}$.

The state of the ith neuron is again represented by vi, and y = [y1, y2, . . . , ym+n]T is the vector consisting of inputs to each neuron, defined by

$$y = \begin{bmatrix} y_1 \\ \vdots \\ y_n \\ y_{n+1} \\ \vdots \\ y_{m+n} \end{bmatrix} = \begin{bmatrix} S(v_1) \\ \vdots \\ S(v_n) \\ u_1 \\ \vdots \\ u_m \end{bmatrix}, \qquad (3)$$

where u = [u1, u2, . . . , um]T is the external input vector to the network. The function S(·) is a monotone increasing, differentiable sigmoidal function of the form

$$S(v) = \frac{a}{1 + e^{-b v}} - k, \qquad (4)$$

where a, b are positive real numbers and k is a real number. In the special case with a = b = 1, k = 0, one obtains the logistic function, and by setting a = b = 2, k = 1, one obtains the hyperbolic tangent function; these are the sigmoidal activation functions most commonly used in neural network applications. Introduce the following parameter vector

$$b_i^{T} = [w_{i1}, w_{i2}, \ldots, w_{iL}], \qquad (5)$$

and the input vector of each neuron,

$$g = \left[ \prod_{j \in I_1} y_j^{d_{j1}}, \; \prod_{j \in I_2} y_j^{d_{j2}}, \; \ldots, \; \prod_{j \in I_L} y_j^{d_{jL}} \right]^{T}, \qquad (6)$$

where bi, g ∈ RL. The mathematical description of the local neuronal dynamics can be restated, for i = 1, 2, . . . , n, as

$$\dot{v}_i = -a_i v_i + b_i^{T} g, \qquad (7)$$


where the vectors bi represent the adjustable weights of the network, while the coefficients ai are part of the underlying network architecture and are fixed during training. To guarantee that each neuron vi is bounded-input bounded-output (BIBO) stable, each ai is taken to be positive. The dynamics of the overall network are described by expressing (7) in vector form as

$$\dot{v} = A v + B g(v, u), \qquad (8)$$

where v = [v1, v2, . . . , vn]T ∈ Rn, B = [b1, b2, . . . , bn]T ∈ Rn×L, and, since all ai are positive, A = diag{−a1, −a2, . . . , −an} ∈ Rn×n is a stable matrix.

Consider the following nonlinear system

$$\dot{x} = f(x, u), \qquad (9)$$

where x ∈ Rn is the system state and u ∈ Rm is the input. The following theorem proves that if a sufficiently large number of higher-order connections is allowed in the RHONN model (8), then it is possible to approximate any dynamical system (9) to any degree of accuracy.

Theorem 1. Suppose that the nonlinear system (9) and the RHONN (8) are initially in the same state x(0) = v(0), where v is the state of the RHONN. Then, for any ε > 0 and any finite T > 0, there exist an integer L and a matrix B* ∈ Rn×L such that

$$\sup_{t \in [0, T]} \| v(t) - x(t) \| < \varepsilon.$$

Proof. Refer to [7,8]. □

This theorem is strictly an existence result; it does not provide any constructive method for obtaining the correct weights B*. That problem is addressed in the following section.
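As an illustration of the model (2)–(8), the following Python sketch builds a small second-order RHONN and integrates it with the Euler method. The network size, the subsets Ik, the exponents djk and all numerical values are illustrative assumptions, not a design from the paper.

```python
import numpy as np

def rhonn_step(v, u, A, B, subsets, degrees, dt=1e-3):
    """One Euler step of the RHONN (8): v' = A v + B g(v, u).

    subsets[k] lists the indices of y entering the k-th high-order term,
    degrees[k] holds the corresponding exponents d_jk of Eq. (2)."""
    y = np.concatenate([np.tanh(v), u])  # Eq. (3); S = tanh, i.e. a = b = 2, k = 1 in (4)
    g = np.array([np.prod(y[idx] ** d) for idx, d in zip(subsets, degrees)])
    return v + dt * (A @ v + B @ g)

# Illustrative second-order network: n = 2 neurons, m = 1 input, L = 3 terms.
A = np.diag([-2.0, -3.0])                  # diag{-a_1, -a_2} with a_i > 0 (BIBO stability)
B = 0.5 * np.ones((2, 3))
subsets = [np.array([0, 1]), np.array([0, 2]), np.array([1])]
degrees = [np.array([1, 1]), np.array([1, 1]), np.array([2])]

v = np.array([0.5, -0.2])
for _ in range(2000):
    v = rhonn_step(v, np.array([1.0]), A, B, subsets, degrees)
# With each a_i > 0 and bounded inputs, the state remains bounded.
```

Since tanh keeps every entry of y in [−1, 1] and u is bounded, each high-order term is bounded, so the stable diagonal A dominates and the trajectory stays bounded, as the BIBO remark after (7) suggests.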

3. Identification of chaotic systems via RHONN

With a Lyapunov-like synthesis approach, a dynamical equation is first obtained in terms of the error signals, which include both the estimation errors and the parameter errors. A Lyapunov-like function V is then considered, whose time derivative V̇ along the trajectories of the error dynamics is made nonpositive by properly designing an adaptive law for the adjustable parameters. The properties of V and V̇ are then used to establish the stability properties of the online estimation scheme.


The chaotic system to be identified can be described by

$$\dot{x} = f(x), \qquad (10)$$

where x ∈ Rn. If one assumes that there is no modeling error, then by Theorem 1 there exists an optimal weight matrix B* such that the unknown dynamical system (10) can be modeled by the dynamical equation

$$\dot{x} = A x + B^{*} g(x). \qquad (11)$$

Under the assumption that the state x in (10) is available for measurement, the neural identifier can be chosen as

$$\dot{v} = A v + B g(x), \qquad x(0) = v(0), \qquad (12)$$

where B is the estimate of the unknown weight parameter matrix. Define the weight error matrix and the error state as H = B − B* ∈ Rn×L and e = v − x = [e1, e2, . . . , en]T ∈ Rn, respectively. Then the error state equation can be obtained from (11) and (12) as

$$\dot{e} = A e + H g(x). \qquad (13)$$

In order to derive a stable adaptation law for estimating the weight parameters, consider the following Lyapunov function:

$$V(e, H) = \frac{1}{2} \left[ e^{T} e + \frac{1}{\gamma} \,\mathrm{tr}\!\left( H H^{T} \right) \right], \qquad (14)$$

where tr(·) denotes the trace and γ > 0 is a design constant. Equivalently,

$$V(e, H) = \frac{1}{2} \left[ \| e \|^{2} + \frac{1}{\gamma} \| H \|_{F}^{2} \right].$$

By defining H = [h1, h2, . . . , hn]T and B = [b1, b2, . . . , bn]T, where hi, bi ∈ RL, the Lyapunov function V(e, H) can be rewritten as

$$V(e, H) = \frac{1}{2} \left[ e^{T} e + \frac{1}{\gamma} \sum_{i=1}^{n} h_i^{T} h_i \right]. \qquad (15)$$

The derivative of V with respect to t is

$$\dot{V}(t) = e^{T} \dot{e} + \frac{1}{\gamma} \sum_{i=1}^{n} h_i^{T} \dot{h}_i = e^{T} \dot{e} + \frac{1}{\gamma} \sum_{i=1}^{n} h_i^{T} \dot{b}_i. \qquad (16)$$


Substituting (13) into (16) yields

$$\dot{V}(t) = e^{T} A e + e^{T} H g(x) + \frac{1}{\gamma} \sum_{i=1}^{n} h_i^{T} \dot{b}_i = e^{T} A e + \sum_{i=1}^{n} h_i^{T} g(x) e_i + \frac{1}{\gamma} \sum_{i=1}^{n} h_i^{T} \dot{b}_i. \qquad (17)$$

The last two terms contain hi and are therefore indefinite. The best one can do at this point is to cancel them by selecting the estimated-parameter update law as

$$\dot{b}_i = -\gamma\, g(x)\, e_i. \qquad (18)$$

Since A is a negative definite matrix, one has

$$\dot{V}(t) = e^{T} A e \leqslant 0, \qquad (19)$$

where the equality holds only when e = 0. The estimated-parameter update law in matrix form follows from (18) as

$$\dot{B}^{T} = -\gamma\, g(x)\, e^{T}. \qquad (20)$$
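The identification scheme (12) with the update law (20) can be exercised numerically. The Python sketch below is a toy experiment under the no-modeling-error assumption: the "unknown" plant is itself generated by a network of the form (11) with weights B*; the regressor g, the dimensions and all constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def g(x):
    """Illustrative high-order regressor (L = 3)."""
    s = np.tanh(x)
    return np.array([s[0], s[1], s[0] * s[1]])

A = np.diag([-5.0, -5.0])                       # stable, as required
B_star = np.array([[1.0, 0.5, -0.3],
                   [0.2, -1.0, 0.8]])           # "unknown" optimal weights B*
dt, gamma = 1e-3, 50.0

x = np.array([1.0, -0.5])                       # plant state
v = x.copy()                                    # identifier state, v(0) = x(0)
B = np.zeros_like(B_star)                       # estimate starts at B(0) = 0

for _ in range(20000):
    gx = g(x)
    e = v - x
    x = x + dt * (A @ x + B_star @ gx)          # plant (11)
    v = v + dt * (A @ v + B @ gx)               # neural identifier (12)
    B = B - dt * gamma * np.outer(e, gx)        # update law (20): B' = -gamma * e g(x)^T

# The state error e = v - x is driven toward zero, as Theorem 2 predicts.
```

Note that only the component of H along the regressor direction is driven to zero; consistent with Theorem 2, the weight error merely stays bounded while the state error vanishes.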

Theorem 2. Consider the neural identifier given by (12), whose weights are adjusted according to the adaptation law (20), where γ is the adaptation rate. Then, under the assumption of no modeling error, the neural identification scheme guarantees the following properties:

(i) e, H ∈ L∞;
(ii) lim_{t→∞} e(t) = 0.

Proof. It follows from the previous derivations that V̇ is negative semidefinite. Then, by the LaSalle–Yoshizawa theorem [20,21], the origin of the error state space is globally asymptotically stable, and the weight estimation error H is globally uniformly bounded. □

Theorem 2 shows that, in the case of no modeling error, the state error between the given system and the RHONN model converges to zero globally and asymptotically. The assumption of no modeling error between the physical plant and the neural network model is crucial to the training algorithm (20). The modeling error is mainly caused by an insufficient number of higher-order terms in the neural model. Modeling inaccuracies can result in parameter drift, i.e., the weight parameters drift to infinity and the estimation error diverges. To accommodate them, modeling errors are now allowed and are assumed to appear as additive disturbances in the differential equations representing the system.


A robust learning algorithm can be obtained by adding a leakage term, known as σ-modification in adaptive control theory, which prevents the weight values from drifting to infinity:

$$\dot{b}_i = \begin{cases} -\gamma\, g(x)\, e_i, & \| b_i \| \leqslant R_i, \\ -\gamma\, g(x)\, e_i - \sigma \gamma\, b_i, & \| b_i \| > R_i, \end{cases} \qquad (21)$$

where γ is the adaptation rate and σ > 0 is a design constant for the leakage term −σγ bi. The above robust adaptation rule guarantees that e and H remain bounded and, furthermore, that the energy of the state errors is proportional to the energy of the modeling errors. In the special case where the modeling error is square-integrable, e converges to zero asymptotically [7].
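A sketch of one Euler step of the switching law (21), with illustrative constants. The usage below drives the update with a zero error signal so that only the leakage acts, pulling a deliberately oversized weight vector back toward the ball of radius Ri:

```python
import numpy as np

def robust_update(b_i, g_x, e_i, gamma, sigma, R_i, dt):
    """One Euler step of the sigma-modification law (21) for one weight row b_i."""
    db = -gamma * g_x * e_i                 # nominal gradient term of (18)
    if np.linalg.norm(b_i) > R_i:           # leakage is active only outside the ball
        db = db - sigma * gamma * b_i
    return b_i + dt * db

b = np.array([10.0, 0.0])                   # oversized weights, ||b|| > R_i
for _ in range(1000):
    b = robust_update(b, np.zeros(2), 0.0, gamma=5.0, sigma=0.1, R_i=2.0, dt=1e-2)
print(np.linalg.norm(b))                    # settles near the boundary R_i = 2
```

Once the norm falls inside the ball the leakage switches off, which is why the weights stop shrinking at the boundary rather than collapsing to zero.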

4. Adaptive feedback linearization control of chaotic systems via RHONN

In adaptive tracking control problems, the objective is to design an adaptive state-feedback controller u ∈ Rn to guide the controlled chaotic system state x(t) to track a pre-specified reference signal xr(t). Let the reference model be described by

$$\dot{x}_{ri} = F_{ri}(x_r), \qquad i = 1, 2, \ldots, n, \qquad (22)$$

where xr = [xr1, xr2, . . . , xrn]T ∈ Rn is the reference trajectory, and Fri(·), i = 1, 2, . . . , n, are known smooth nonlinear functions.

The proposed adaptive control scheme comprises a neural identifier, whose parameters are updated online in such a way that the error between the controlled chaotic system output and the neural identifier output is approximately zero. The controller receives information from the identifier and outputs the signal that forces the controlled chaotic system to perform the pre-specified task. The controlled chaotic system is described by

$$\dot{x} = f(x) + u, \qquad (23)$$

where u ∈ Rn is the control input vector. According to Theorem 1, there exists an optimal weight matrix B* such that the unknown chaotic system (23) can be modeled by the dynamical state equation

$$\dot{x} = A x + B^{*} g(x) + u. \qquad (24)$$

The RHONN for modeling the controlled chaotic system (23) is formulated as

$$\dot{v} = A v + B g(x) + u, \qquad x(0) = v(0). \qquad (25)$$

Define the error e ∈ Rn between the neural identifier state and the controlled chaotic system state by e = v − x. Then, from (24) and (25), one obtains the error dynamical equation

$$\dot{e} = A e + H g(x), \qquad (26)$$

where H = B − B* ∈ Rn×L. Define the error ec ∈ Rn between the identifier state and the reference model state by

$$e_c = v - x_r. \qquad (27)$$

Then, the error dynamical equation can be found from (25) and (27) as

$$\dot{e}_c = A v + B g(x) + u - \dot{x}_r. \qquad (28)$$

In an attempt to synthesize the control law by means of well-developed linear control theory, the system given in (25) is transformed into

$$\dot{v} = A v + t, \qquad (29)$$

by introducing the state-feedback law

$$u = t - B g(x), \qquad (30)$$

where t is the new control input for the linearized system (29). Due to the robustness inherent in optimal control, the control input t can be designed by minimizing the following performance index

$$J = \int_{0}^{\infty} \left\{ [v(t) - x_r(t)]^{T} Q [v(t) - x_r(t)] + t^{T}(t) R\, t(t) \right\} \mathrm{d}t, \qquad (31)$$

with Q ⩾ 0 and R > 0. In light of well-established optimal control theory, the optimal control law is given by [22] as

$$t = -K v + E x_r, \qquad (32)$$

where K = R^{−1} P is the feedback gain, E = −R^{−1} [(A − K)^{−1}]^{T} Q is the feedforward gain, and P is the positive definite and symmetric solution of the following Riccati equation

$$A^{T} P + P A - P R^{-1} P + Q = 0. \qquad (33)$$

The closed-loop control law is obtained by substituting (32) into (30), as

$$u = E x_r - K v - B g(x). \qquad (34)$$
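The gains in (32)–(33) can be computed numerically. The following Python sketch solves the Riccati equation (33) through the stable invariant subspace of the associated Hamiltonian matrix (the standard eigenvector method, noting that (33) corresponds to a CARE with an identity input matrix); the weighting matrices and the diagonal A are illustrative, not the paper's design values.

```python
import numpy as np

def care_solve(A, Q, R):
    """Solve A^T P + P A - P R^{-1} P + Q = 0 (Eq. (33), input matrix = I)
    via the stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -Rinv], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]               # the n stable eigenvectors
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))     # P = X2 X1^{-1}
    return 0.5 * (P + P.T)                  # symmetrize against round-off

# Illustrative data.
A = np.diag([-25.0, -25.0, -3.0])
Q, R = np.eye(3), 0.1 * np.eye(3)
P = care_solve(A, Q, R)
K = np.linalg.inv(R) @ P                                  # feedback gain of (32)
E = -np.linalg.inv(R) @ np.linalg.inv(A - K).T @ Q        # feedforward gain of (32)
print(np.allclose(A.T @ P + P @ A - P @ np.linalg.inv(R) @ P + Q,
                  np.zeros((3, 3)), atol=1e-6))
```

In production code one would typically call a library CARE solver instead; the explicit Hamiltonian construction is shown here only to keep the sketch self-contained.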

The following result guarantees the closed-loop stability and convergence of the proposed adaptive control scheme.

Theorem 3. Consider the neural identifier given by (25), whose weights are adjusted by the adaptation law

$$\dot{B}^{T} = -\gamma\, g(x)\, e^{T},$$


where γ is the adaptation rate. Under the control law (34), the adaptive control scheme guarantees the following properties in the absence of modeling error:

(i) e, ec, H ∈ L∞;
(ii) lim_{t→∞} e(t) = 0, lim_{t→∞} e_c(t) = 0.

Proof. Define V1 = (1/2) e_c^T e_c and introduce the Lyapunov function

$$V(e, e_c, H) = \frac{1}{2} \left[ e^{T} e + e_c^{T} e_c + \frac{1}{\gamma} \,\mathrm{tr}\!\left( H H^{T} \right) \right], \qquad (35)$$

where tr(·) denotes the trace and γ > 0 is a design constant; equivalently,

$$V(e, e_c, H) = \frac{1}{2} \left[ \| e \|^{2} + \| e_c \|^{2} + \frac{1}{\gamma} \| H \|_{F}^{2} \right].$$

As in Section 3, define H = [h1, h2, . . . , hn]T and B = [b1, b2, . . . , bn]T, where hi, bi ∈ RL. The derivative of V with respect to time is then

$$\dot{V}(t) = e^{T} \dot{e} + \frac{1}{\gamma} \sum_{i=1}^{n} h_i^{T} \dot{b}_i + \dot{V}_1. \qquad (36)$$

Substituting (26) into (36) yields

$$\dot{V}(t) = e^{T} [A e + H g(x)] + \frac{1}{\gamma} \sum_{i=1}^{n} h_i^{T} \dot{b}_i + \dot{V}_1. \qquad (37)$$

Analogous to the procedure in Section 3, the learning law can be derived as

$$\dot{b}_i = -\gamma\, g(x)\, e_i, \qquad (38)$$

leading to

$$\dot{V}(t) = e^{T} A e + \dot{V}_1. \qquad (39)$$

From linear optimal control theory [22], one obtains

$$\dot{V}_1 = -e_c^{T} Q\, e_c, \qquad -Q < 0. \qquad (40)$$

Given that the matrix A is also negative definite, one has

$$\dot{V}(t) = \begin{pmatrix} e \\ e_c \end{pmatrix}^{T} \begin{pmatrix} A & 0 \\ 0 & -Q \end{pmatrix} \begin{pmatrix} e \\ e_c \end{pmatrix} \leqslant 0, \qquad (41)$$

where the equality holds only when e = ec = 0. This implies that e(t) and ec(t) converge to zero asymptotically. The estimated-parameter update law in matrix form can be found from (38) as

$$\dot{B}^{T} = -\gamma\, g(x)\, e^{T}. \qquad (42)$$

Thus, it can be concluded from the LaSalle–Yoshizawa theorem that the weight estimation error H is globally uniformly bounded. □

It can be inferred from Theorem 3 that the error between the controlled chaotic system state and the reference model state converges to zero asymptotically. As was done previously to accommodate modeling errors, the adaptation law (38) can be improved further by σ-modification as follows:

$$\dot{b}_i = \begin{cases} -\gamma\, g(x)\, e_i, & \| b_i \| \leqslant R_i, \\ -\gamma\, g(x)\, e_i - \sigma \gamma\, b_i, & \| b_i \| > R_i, \end{cases}$$

where γ is the adaptation rate and σ > 0 is a design constant for the leakage term −σγ bi.

5. Simulation study

In this simulation study, Chen's chaotic system [23] is used to demonstrate the effectiveness of the proposed adaptive control strategy via RHONN. Chen's chaotic system is described by

$$\begin{cases} \dot{x}_1 = a (x_2 - x_1), \\ \dot{x}_2 = (c - a) x_1 - x_1 x_3 + c x_2, \\ \dot{x}_3 = x_1 x_2 - b x_3. \end{cases} \qquad (43)$$
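A minimal Python sketch integrating (43) with a fourth-order Runge–Kutta scheme, using the chaotic parameter set a = 35, b = 3, c = 28; the step size and initial condition are illustrative.

```python
import numpy as np

def chen_rhs(x, a=35.0, b=3.0, c=28.0):
    """Right-hand side of Chen's system (43)."""
    x1, x2, x3 = x
    return np.array([a * (x2 - x1),
                     (c - a) * x1 - x1 * x3 + c * x2,
                     x1 * x2 - b * x3])

def rk4_step(f, x, dt):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 1.0, 20.0])
for _ in range(20000):                 # 20 s with dt = 1e-3
    x = rk4_step(chen_rhs, x, 1e-3)
# The trajectory wanders chaotically but remains on the bounded attractor.
```

Although the trajectory is sensitive to the initial condition, its boundedness is robust, which is what the uncontrolled phase portrait of Fig. 1 reflects.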

Fig. 1. The deterministic chaotic attractor of Chen's system, plotted in the x3–x1–x2 space.


Fig. 2. The deterministic chaotic time series of Chen's system.

With a = 35, b = 3 and c = 28, this system is chaotic, as shown by Figs. 1 and 2. It has been widely experienced that the chaotic Chen's system is relatively difficult to control as compared to the familiar chaotic Lorenz system and Chua's circuit, due to the prominent three-dimensional and complex topological features of its attractor, especially its rapid change of velocity in the x3-direction. The controlled Chen's system is written as

$$\dot{x} = f(x) + u, \qquad (44)$$

where u ∈ R3 is the control input vector. The recurrent second-order neural network for modeling the controlled Chen's system is

$$\dot{v} = A v + B g(x) + u, \qquad (45)$$

where A = diag{−25, −25, −3}, B is the neural network weight parameter matrix to be identified, g = [y2y3, y1y3, y1y2]T, and yi = tanh(T xi) with T = 10. Here, x and v are the states of the identified chaotic system and the neural identifier, respectively, with the same initial condition x(0) = v(0) = [15, 5, 20]T. The initial parameter matrix and the adaptation rate are taken as B(0) = 0 and γ = 5.


Fig. 3. The desired reference orbit xr(t), plotted in the x3–x1–x2 space.

Fig. 4. The deterministic time series of the desired reference orbit xr(t).


The reference trajectory xr(t) is specified as a closed orbit corresponding to a periodic solution of the unforced Chen's equation. Let the parameters of system (43) be a = 45, b = 1.5 and c = 28, resulting in a periodic solution [24]. The reference orbit starts from the initial state xr(0) = [1.7570, 1.9648, 7.9743]T.

Fig. 5. Tracking performance of the first coordinate (solid line: xr1, dotted line: x1).

Fig. 6. Tracking performance of the second coordinate (solid line: xr2, dotted line: x2).

Fig. 7. Tracking performance of the third coordinate (solid line: xr3, dotted line: x3).

Fig. 8. The time response of control input u (solid line: u1, dotted line: u2, dashed line: u3).

The three-dimensional phase portrait and the time-domain projection of the reference xr(t) are shown in Figs. 3 and 4, respectively. The objective here is to find an adaptive control law to guide the chaotic trajectory x(t) to settle on this desired periodic orbit xr(t). The control performance under the developed adaptive control law, given by (34) and (42), is visualized in Figs. 5–7, which show that the trajectory of the controlled chaotic system approaches the reference orbit with satisfactory performance. Finally, the control signals u are plotted in Fig. 8.

6. Conclusions

In this paper, we have developed an adaptive control strategy using the recurrent high-order neural network (RHONN) for identification and control of chaotic systems with unknown nonlinearities. In this design, the feedback linearization technique is used in an adaptive manner. The global uniform boundedness of parameter estimation errors and the asymptotic stability of tracking errors are proved by Lyapunov stability theory and the LaSalle–Yoshizawa theorem. Computer simulation on a three-dimensional chaotic system with unknown nonlinearities was performed, illustrating the effectiveness of the proposed RHONN-based adaptive control method.

Acknowledgement

This work was supported by the US Army Research Office under Grant DAAD 19-02-1-0321 and by NASA-JSC under Grant NNJ04HF32G.

References

[1] J. Stark, K. Hardy, Chaos: useful at last? Science 301 (2003) 1192–1193.
[2] S.M. Guo, L.S. Shieh, G. Chen, C.F. Lin, Effective chaotic orbit tracker: a prediction-based digital redesign approach, IEEE Trans. CAS 47 (2000) 1557–1570.
[3] C.C. Fuh, P.C. Tung, Controlling chaos using differential geometric method, Phys. Rev. Lett. 75 (1995) 2952–2955.
[4] J.A. Gallegos, Nonlinear regulation of a Lorenz system by feedback linearization techniques, Dyn. Contr. 4 (1994) 277–298.
[5] A. Isidori, Nonlinear Control Systems, Springer-Verlag, 1995.
[6] D.T. Pham, X. Liu, Neural Networks for Identification, Prediction and Control, Springer-Verlag, 1995.
[7] E.B. Kosmatopoulos, M.M. Polycarpou, M.A. Christodoulou, P.A. Ioannou, High-order neural network structures for identification of dynamical systems, IEEE Trans. Neural Networks 6 (1995) 422–431.
[8] S.M. Amin, V. Gerhart, E.Y. Rodin, System identification via artificial neural networks: applications to on-line aircraft parameter estimation, in: Proc. AIAA/SAE World Aviation Congress, Anaheim, CA, 1997, pp. 1–22.
[9] G.A. Rovithakis, M.A. Christodoulou, On using recurrent neural network models to develop direct robust adaptive regulators for unknown systems, Intell. Autom. Soft Comput. 2 (1997) 321–338.


[10] Z. Lu, L.S. Shieh, G. Chen, N.P. Coleman, Simplex sliding mode control for nonlinear uncertain systems via chaos optimization, Chaos, Solitons Fract. 23 (2005) 747–755.
[11] H.T. Yau, C.K. Chen, C.L. Chen, Sliding mode control of chaotic systems with uncertainties, Int. J. Bifurcat. Chaos 10 (2000) 1139–1147.
[12] S. Mascolo, G. Grassi, Controlling chaotic dynamics using backstepping design with application to the Lorenz system and Chua's circuit, Int. J. Bifurcat. Chaos 9 (1999) 1425–1434.
[13] S.S. Ge, C. Wang, T.H. Lee, Adaptive backstepping control of a class of chaotic systems, Int. J. Bifurcat. Chaos 10 (2000) 1149–1156.
[14] J.H. Lü, S.C. Zhang, Controlling Chen's chaotic attractor using backstepping design based on parameters identification, Phys. Lett. A 286 (2001) 148–152.
[15] T. Yang, C.M. Yang, L.B. Yang, Detailed study of adaptive control of chaotic systems with unknown parameters, Dyn. Contr. 8 (1998) 255–267.
[16] S.H. Chen, J.H. Lü, Parameters identification and synchronization of chaotic systems based upon adaptive control, Phys. Lett. A 299 (2002) 353–358.
[17] J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Nat. Acad. Sci. 81 (1984) 3088–3092.
[18] M.A. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Sys. Man Cyber. 13 (1983) 815–826.
[19] Y. Kamp, M. Hasler, Recursive Neural Networks for Associative Memory, Wiley, New York, 1990.
[20] J.P. LaSalle, Stability theory for ordinary differential equations, J. Diff. Eqs. 4 (1968) 57–65.
[21] T. Yoshizawa, Stability Theory by Lyapunov's Second Method, The Mathematical Society of Japan, 1966.
[22] F.L. Lewis, V.L. Syrmos, Optimal Control, John Wiley & Sons, 1995.
[23] G. Chen, T. Ueta, Yet another chaotic attractor, Int. J. Bifurcat. Chaos 9 (1999) 1465–1466.
[24] T. Ueta, G. Chen, Bifurcation analysis of Chen's equation, Int. J. Bifurcat. Chaos 10 (2000) 1917–1931.