Synchronization of Ghostburster neurons under external electrical stimulation via adaptive neural network H∞ control


Neurocomputing 74 (2010) 230–238


H.Y. Li a, Y.K. Wong b,*, W.L. Chan b, K.M. Tsang b

a Automation and Electrical Engineering College, Tianjin University of Technology and Education, Tianjin 300222, PR China
b Department of Electrical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong

Article info

Article history: Received 12 June 2009; received in revised form 25 February 2010; accepted 2 March 2010; available online 31 March 2010. Communicated by D. Wang.

Keywords: Synchronization; Ghostburster; H∞ control; Neural networks

Abstract

In this paper, an adaptive neural network H∞ control is proposed to realize the synchronization of two Ghostburster neurons under external electrical stimulation. We first analyze the periodic and chaotic dynamics of an individual Ghostburster neuron under different external electrical stimulations, and then design an H∞ controller via adaptive neural networks to synchronize two Ghostburster neurons and drive the slave neuron to act as the master one. Asymptotic synchronization can be obtained by a proper choice of the control parameters. Simulation results demonstrate the effectiveness of the proposed control method. © 2010 Elsevier B.V. All rights reserved.

1. Introduction

The study of nonlinear, dynamical aspects of neural systems has attracted a lot of attention in recent years [1–10]. In particular, great efforts have been devoted to the synchronization control of neural systems, because the presence, absence, or degree of synchronization can be an important part of the function or dysfunction of a neural system [2,3]. To study the nonlinear behavior of single neurons and neural networks, several nonlinear mathematical models have been developed, such as the Hodgkin–Huxley (HH) model [4], the FitzHugh–Nagumo (FHN) model [5,6], the Hindmarsh–Rose (HR) model [7,8], and the Chay model [9]. In this paper, the Ghostburster model is used to verify the proposed control algorithm. The Ghostburster model is a two-compartment model of the pyramidal cells of the electrosensory lateral line lobe (ELL) of weakly electric fish; it describes the dynamics between the soma and the dendrite of pyramidal cells and the dynamics of these excitable systems under different external fields [10,11]. Complex nonlinear behaviors such as limit cycles, phase-locking, and chaos can be observed by applying different external stimulations to an individual Ghostburster neuron [12,13]. From the synchronization point of view, various techniques have been proposed to obtain stable synchronization between

* Corresponding author. E-mail address: [email protected] (Y.K. Wong).
doi:10.1016/j.neucom.2010.03.004

identical and non-identical neuron systems, such as active control [14], fuzzy adaptive sliding mode control [15], nonlinear control [16], and adaptive fuzzy control [17]. In this paper, H∞ control is adopted via adaptive neural networks to synchronize two identical Ghostburster neuron systems under different external electrical stimuli. The radial basis function neural network (RBFNN) is widely used for modeling nonlinear functions because of its good function-approximation capability [18]. An RBFNN is therefore employed first to approximate the uncertain nonlinear functions of the dynamical system, and the update laws are derived according to the Lyapunov stability theorem. The H∞ tracking technique is then used to attenuate the effects caused by unmodeled dynamics, disturbances, and approximation errors. The proposed controller not only guarantees closed-loop stability but also assures the H∞ tracking performance of the coupled system.

The rest of the paper is organized as follows. In Section 2, the complex nonlinear dynamics of an individual Ghostburster neuron are studied. In Section 3, a master–slave system is created with the Ghostburster model, and an adaptive controller with H∞ tracking performance is designed for chaos synchronization of the coupled Ghostburster neurons. Based on the Lyapunov stability theorem, the stability analysis of the proposed method is derived, and asymptotic synchronization can be obtained by a proper choice of the control parameters. Simulation results in Section 4 show that the proposed algorithm can successfully realize the synchronization of two Ghostburster neurons. Finally, conclusions are given in Section 5.


2. Dynamics of the Ghostburster model for an individual neuron

The Ghostburster model, derived from the electrosensory lateral line lobe (ELL) of weakly electric fish, has been described in [19]. The model neuron comprises an isopotential soma and a single dendritic compartment connected by an axial resistance, 1/g_c, allowing for the electrotonic diffusion of currents from the soma (s) to the dendrite (d) and vice versa [10]. Both the somatic and dendritic compartments include the essential spiking currents, fast inward Na+ (I_{Na,s}, I_{Na,d}) and outward delayed-rectifying K+ (I_{Dr,s}, I_{Dr,d}), as well as passive leak currents (I_{leak}). The presence of spiking currents in the dendrite enables the active backpropagation of somatic action potentials required for bursting [20]. V_s and V_d represent the somatic and dendritic membrane potentials, respectively. The coupling between the two compartments is assumed to occur through simple electrotonic diffusion currents from soma to dendrite, I_{s/d}, and vice versa, I_{d/s}.


The Ghostburster model comprises six differential equations:

dV_s/dt = I_s + g_{Na,s} m_{∞,s}^2(V_s)(1 − n_s)(V_{Na} − V_s) + g_{Dr,s} n_s^2 (V_K − V_s) + (g_c/k)(V_d − V_s) + g_{leak}(V_l − V_s)

dn_s/dt = (n_{∞,s}(V_s) − n_s)/τ_{n_s}

dV_d/dt = g_{Na,d} m_{∞,d}^2(V_d) h_d (V_{Na} − V_d) + g_{Dr,d} n_d^2 p_d (V_K − V_d) + (g_c/(1 − k))(V_s − V_d) + g_{leak}(V_l − V_d)

dh_d/dt = (h_{∞,d}(V_d) − h_d)/τ_{h_d}

dn_d/dt = (n_{∞,d}(V_d) − n_d)/τ_{n_d}

dp_d/dt = (p_{∞,d}(V_d) − p_d)/τ_{p_d}    (1)

Here m is an activation variable, and h, n, and p are inactivation variables. The parameter g denotes a maximal conductance (g_max, mS/cm²). Table 1 lists part of the parameter values. The infinite (steady-state) conductance curves x_{∞,y}(V_y) = 1/(1 + e^{−(V_y − V_{1/2})/λ}) (x stands for m, n, h, and p; y stands for s and d) describe the steady-state conductances of the ionic currents (I_{Na,s}, I_{Dr,s}, I_{Na,d}, and I_{Dr,d}), where V_{1/2} is the half-deactivation voltage, λ the deactivation slope, and τ the deactivation time constant; NA means not applicable [20].

Table 1
Ghostburster parameter values.

Current                                   g_max (mS/cm²)   V_{1/2} (mV)   λ (mV)   τ (ms)
I_{Na,s} [m_{∞,s}(V_s)]                   55               −40            3        NA
I_{Dr,s} [n_{∞,s}(V_s)]                   20               −40            3        0.39
I_{Na,d} [m_{∞,d}(V_d)/h_{∞,d}(V_d)]      5                −40/−52        5/−5     NA/1
I_{Dr,d} [n_{∞,d}(V_d)/p_{∞,d}(V_d)]      15               −40/−65        5/−6     0.9/5

The rest of the system parameter values are chosen as follows: the ratio of somatic to total area k = 0.4; the reversal potentials V_{Na} = 40 mV, V_K = −88.5 mV, V_{leak} = −70 mV; g_{leak} = 0.18; and C_m = 1 μF/cm².
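For concreteness, the following is a minimal Python sketch of Eq. (1) using the values in Table 1 and the constants above. It assumes g_c = 1 mS/cm² (the value used for the original model in [10], which is not restated in this section) and absorbs C_m = 1 μF/cm² into the time scale; function and variable names are illustrative only, not the authors' code.

```python
import numpy as np

# Steady-state (infinite) conductance curve shared by all gating variables:
# x_inf(V) = 1 / (1 + exp(-(V - V_half) / lam))
def x_inf(V, V_half, lam):
    return 1.0 / (1.0 + np.exp(-(V - V_half) / lam))

# Parameters from Table 1 and the surrounding text; gc = 1 mS/cm^2 is an assumption
# taken from the original Ghostburster model [10].
PARAMS = dict(gNa_s=55.0, gDr_s=20.0, gNa_d=5.0, gDr_d=15.0,
              gc=1.0, gleak=0.18, k=0.4,
              VNa=40.0, VK=-88.5, Vl=-70.0,
              tau_ns=0.39, tau_hd=1.0, tau_nd=0.9, tau_pd=5.0)

def ghostburster_rhs(state, t, Is, p=PARAMS):
    """Right-hand side of the six-variable Ghostburster model, Eq. (1)."""
    Vs, ns, Vd, hd, nd, pd = state
    # Somatic compartment
    m_inf_s = x_inf(Vs, -40.0, 3.0)                 # instantaneous Na activation
    INa_s = p['gNa_s'] * m_inf_s**2 * (1.0 - ns) * (p['VNa'] - Vs)
    IDr_s = p['gDr_s'] * ns**2 * (p['VK'] - Vs)
    dVs = Is + INa_s + IDr_s + (p['gc'] / p['k']) * (Vd - Vs) + p['gleak'] * (p['Vl'] - Vs)
    dns = (x_inf(Vs, -40.0, 3.0) - ns) / p['tau_ns']
    # Dendritic compartment
    m_inf_d = x_inf(Vd, -40.0, 5.0)
    INa_d = p['gNa_d'] * m_inf_d**2 * hd * (p['VNa'] - Vd)
    IDr_d = p['gDr_d'] * nd**2 * pd * (p['VK'] - Vd)
    dVd = INa_d + IDr_d + (p['gc'] / (1.0 - p['k'])) * (Vs - Vd) + p['gleak'] * (p['Vl'] - Vd)
    dhd = (x_inf(Vd, -52.0, -5.0) - hd) / p['tau_hd']
    dnd = (x_inf(Vd, -40.0, 5.0) - nd) / p['tau_nd']
    dpd = (x_inf(Vd, -65.0, -6.0) - pd) / p['tau_pd']
    return np.array([dVs, dns, dVd, dhd, dnd, dpd])
```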

Fig. 1. Response curves of V_s and V_d when I_s = 6.5 μA ((a), (b)) and when I_s = 9 μA ((c), (d)).


The initial value is chosen as [V_{s0}, n_{s0}, V_{d0}, h_{d0}, n_{d0}, p_{d0}]^T = [0, 0, 0, 0, 0, 0]^T, and the external electrical current stimulation I_s is then varied. As I_s is increased, various dynamical behaviors can be observed, such as limit cycles, quasi-periodicity, and chaos. Here, two typical dynamics are displayed: a period-1 oscillation when I_s = 6.5 μA and a chaotic state when I_s = 9 μA. The corresponding response curves of the state variables V_s and V_d are shown in Fig. 1; they are clearly qualitatively different.
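A short integration sketch, building on the hypothetical ghostburster_rhs above and assuming SciPy's odeint is available, reproduces the two regimes described here:

```python
import numpy as np
from scipy.integrate import odeint

t = np.arange(0.0, 200.0, 0.01)   # 200 ms of simulated time, 0.01 ms step
x0 = np.zeros(6)                  # [Vs0, ns0, Vd0, hd0, nd0, pd0] = 0

# Period-1 firing versus chaos, as in Fig. 1 (Is in the units used in the text)
sol_periodic = odeint(ghostburster_rhs, x0, t, args=(6.5,))
sol_chaotic  = odeint(ghostburster_rhs, x0, t, args=(9.0,))

# Plotting column 0 (Vs) and column 2 (Vd) against t shows the qualitative
# difference between the two regimes reported in Fig. 1.
Vs_periodic, Vs_chaotic = sol_periodic[:, 0], sol_chaotic[:, 0]
```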

3. Synchronization of the Ghostburster systems via H∞ adaptive control

3.1. Description of the Ghostburster master–slave system

Let us consider two Ghostburster neurons in a master–slave configuration and design a controller such that the slave neuron (subscript s) synchronizes with the master neuron (subscript m). The master and slave neuron systems are expressed as follows:

dV_{s,m}/dt = I_{s,m} + g_{Na,s} m_{∞,s}^2(V_{s,m})(1 − n_{s,m})(V_{Na} − V_{s,m}) + g_{Dr,s} n_{s,m}^2(V_K − V_{s,m}) + (g_c/k)(V_{d,m} − V_{s,m}) + g_{leak}(V_l − V_{s,m})

dV_{d,m}/dt = g_{Na,d} m_{∞,d}^2(V_{d,m}) h_{d,m}(V_{Na} − V_{d,m}) + g_{Dr,d} n_{d,m}^2 p_{d,m}(V_K − V_{d,m}) + (g_c/(1 − k))(V_{s,m} − V_{d,m}) + g_{leak}(V_l − V_{d,m})    (2)

dV_{s,s}/dt = I_{s,s} + g_{Na,s} m_{∞,s}^2(V_{s,s})(1 − n_{s,s})(V_{Na} − V_{s,s}) + g_{Dr,s} n_{s,s}^2(V_K − V_{s,s}) + (g_c/k)(V_{d,s} − V_{s,s}) + g_{leak}(V_l − V_{s,s}) + u_1

dV_{d,s}/dt = g_{Na,d} m_{∞,d}^2(V_{d,s}) h_{d,s}(V_{Na} − V_{d,s}) + g_{Dr,d} n_{d,s}^2 p_{d,s}(V_K − V_{d,s}) + (g_c/(1 − k))(V_{s,s} − V_{d,s}) + g_{leak}(V_l − V_{d,s}) + u_2    (3)

The terms u_1 and u_2 in Eqs. (2) and (3) are the added control forces such that the dynamical behavior of the slave neuron can track that of the master one. Define the error system as the difference between the master and the slave neurons, i.e. e_1 = V_{s,s} − V_{s,m} and e_2 = V_{d,s} − V_{d,m}. Then

ė_1 = −(g_c/k + g_{leak}) e_1 + [(I_{s,s} − I_{s,m}) + (g_{Na,s} m_{∞,s}^2(V_{s,s})(1 − n_{s,s})(V_{Na} − V_{s,s}) + g_{Dr,s} n_{s,s}^2(V_K − V_{s,s})) − (g_{Na,s} m_{∞,s}^2(V_{s,m})(1 − n_{s,m})(V_{Na} − V_{s,m}) + g_{Dr,s} n_{s,m}^2(V_K − V_{s,m}))] + (g_c/k) e_2 + u_1
    = −(g_c/k + g_{leak}) e_1 + (g_c/k) e_2 + f_1 + u_1

ė_2 = −(g_c/(1 − k) + g_{leak}) e_2 + [(g_{Na,d} m_{∞,d}^2(V_{d,s}) h_{d,s}(V_{Na} − V_{d,s}) + g_{Dr,d} n_{d,s}^2 p_{d,s}(V_K − V_{d,s})) − (g_{Na,d} m_{∞,d}^2(V_{d,m}) h_{d,m}(V_{Na} − V_{d,m}) + g_{Dr,d} n_{d,m}^2 p_{d,m}(V_K − V_{d,m}))] + (g_c/(1 − k)) e_1 + u_2
    = (g_c/(1 − k)) e_1 − (g_c/(1 − k) + g_{leak}) e_2 + f_2 + u_2    (4)

Rewrite Eq. (4) as

ė = A e + f + u    (5)

where e = [e_1, e_2]^T, u = [u_1, u_2]^T, f = [f_1, f_2]^T,

A = [ −(g_c/k + g_{leak})        g_c/k
       g_c/(1 − k)              −(g_c/(1 − k) + g_{leak}) ]

f_1 = (I_{s,s} − I_{s,m}) + (g_{Na,s} m_{∞,s}^2(V_{s,s})(1 − n_{s,s})(V_{Na} − V_{s,s}) + g_{Dr,s} n_{s,s}^2(V_K − V_{s,s})) − (g_{Na,s} m_{∞,s}^2(V_{s,m})(1 − n_{s,m})(V_{Na} − V_{s,m}) + g_{Dr,s} n_{s,m}^2(V_K − V_{s,m}))

f_2 = (g_{Na,d} m_{∞,d}^2(V_{d,s}) h_{d,s}(V_{Na} − V_{d,s}) + g_{Dr,d} n_{d,s}^2 p_{d,s}(V_K − V_{d,s})) − (g_{Na,d} m_{∞,d}^2(V_{d,m}) h_{d,m}(V_{Na} − V_{d,m}) + g_{Dr,d} n_{d,m}^2 p_{d,m}(V_K − V_{d,m}))

The synchronization problem between the two neuron systems can thus be translated into the problem of realizing asymptotic stability of the error system, i.e. lim_{t→∞} ||e(t)|| = 0, by means of the controller u.

3.2. Adaptive neural network H∞ controller design

In this section, the control objective is to design an H∞ controller via adaptive neural networks for the master–slave system such that closed-loop stability is guaranteed. The RBFNN can be considered as a two-layer network in which the hidden layer behaves as a fixed nonlinear transformation with no adjustable parameters, i.e. the input space is mapped into a new space, and the output layer then combines the outputs of that space linearly. It therefore belongs to the class of linearly parameterized networks. The following RBFNN [21] can be used to approximate any continuous function f(x): R^n → R:

f_{nn}(x) = θ^T φ(x)    (6)

where the input vector x ∈ Ω_x ⊂ R^n with Ω_x a compact set, the weight vector θ ∈ Ω_θ ⊂ R^m with m the NN node number, and the basis functions φ(x) are chosen as the commonly used Gaussian functions with fixed centers and widths. According to Sanner and Slotine [22], θ^T φ(x) with a sufficiently large number of NN nodes can approximate any continuous function f(x) over the compact set Ω_x to arbitrary precision, in the form

f(x) = θ^{*T} φ(x) + ω,  for all x ∈ Ω_x    (7)

where θ^* is the ideal constant weight vector and ω is the approximation error.

Assumption 1. There is an unknown constant ω^* > 0 over the compact set Ω_x such that

||ω|| ≤ ω^*    (8)

Typically, θ^* is chosen as the value which minimizes ||ω|| for all x ∈ Ω_x, i.e.

θ^* := arg min_{θ ∈ Ω_θ} [ sup_{x ∈ Ω_x} ||θ^T φ(x) − f(x)|| ]    (9)

Since θ^* is generally unknown and needs to be estimated in the controller design, let θ̂ be the estimate of θ^*, and denote θ̃ = θ^* − θ̂ as the weight estimation error vector. In this paper, two RBFNNs θ_1^T φ_1(x) and θ_2^T φ_2(x) are used to approximate the continuous functions f_1 and f_2, respectively.
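As an illustration of Eqs. (6) and (7), the following is a minimal sketch of a Gaussian RBF approximator with fixed centers and a common width. The class name and the exact Gaussian normalization are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class GaussianRBF:
    """theta^T phi(x) with fixed Gaussian centers and width (Eq. (6)); only theta is adapted."""
    def __init__(self, centers, width):
        self.centers = np.asarray(centers)          # shape (m, n): m nodes, n inputs
        self.width = float(width)                   # common width, as used in Section 4
        self.theta = np.zeros(len(self.centers))    # adjustable output weights

    def phi(self, x):
        # Gaussian basis; the 2*width**2 scaling is one common convention (an assumption here)
        d2 = np.sum((self.centers - np.asarray(x))**2, axis=1)
        return np.exp(-d2 / (2.0 * self.width**2))

    def __call__(self, x):
        return self.theta @ self.phi(x)
```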


Fig. 2. The synchronization of two Ghostburster neurons (master and slave) via adaptive neural network control. (a) Response curves of the state variables V_{s,m}, V_{s,s} and the corresponding error e_1 = V_{s,s} − V_{s,m}. (b) Response curves of the state variables V_{d,m}, V_{d,s} and the corresponding error e_2 = V_{d,s} − V_{d,m}. (c) Time curves of the weight vector of Neural Network 1, i.e. θ_1. (d) Time curves of the weight vector of Neural Network 2, i.e. θ_2. The control actions are implemented after 100 ms.

With these approximators, the error dynamics Eq. (4) becomes

ė = A e + θ^{*T} φ(x) + u + ω    (10)

where

e = [e_1, e_2]^T,  u = [u_1, u_2]^T,  ω = [ω_1, ω_2]^T,  φ = [φ_1, φ_2]^T,
θ^* = [ θ_1^*   0
         0     θ_2^* ]    (11)

Assumption 2. The lumped uncertainty is assumed [23] to satisfy ω ∈ L_2[0,T] for all T ∈ [0,∞).

To this end, a controller is obtained in Theorem 1 below which guarantees the H∞ tracking performance [23] for the overall system with uncertain nonlinear functions f_1 and f_2, without prior knowledge of the upper bound of the lumped uncertainties.

Theorem 1. Consider the error dynamics Eq. (4) with uncertain nonlinear functions f_1 and f_2, which are approximated as in Eq. (7). Suppose Assumptions 1 and 2 are both satisfied and the controller is chosen as

u = −k e − θ̂^T φ(x) − (1/(2r^2)) P e    (12)

where

k = [ k_1  k_2
      k_3  k_4 ],   P = [ P_1  0
                          0   P_2 ],   θ̂ = [ θ̂_1  0
                                              0   θ̂_2 ]

The feedback gain matrix k is chosen such that A_1 = A − k satisfies the Hurwitz condition, and r > 0 is an attenuation level. The matrix P > 0 (P_1 and P_2 are constants) is chosen as the solution of the Lyapunov matrix equation

A_1^T P + P A_1 = −Q    (13)

where Q = Q^T > 0 is a given matrix. Choose the following adaptive update law for θ̂_1 and θ̂_2:

θ̂̇ = [ θ̂̇_1  0
        0   θ̂̇_2 ] = [ Γ_1 φ_1(x) P_1 e_1   0
                        0                  Γ_2 φ_2(x) P_2 e_2 ]    (14)

where Γ_1 > 0 and Γ_2 > 0 are constants.
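A hedged sketch of one control step is given below: it evaluates the controller of Eq. (12) and advances the update law of Eq. (14) with a simple Euler step. The function name, the Euler discretization, and the argument layout are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def control_and_update(e, phi1, phi2, theta1, theta2,
                       K, Pmat, r, Gamma1, Gamma2, dt):
    """One step of the controller (Eq. (12)) and the adaptive law (Eq. (14)).

    e     : error vector [e1, e2] = [Vs_s - Vs_m, Vd_s - Vd_m]
    phi1  : basis vector phi_1(x) of RBFNN 1; phi2 likewise for RBFNN 2
    K     : 2x2 feedback gain matrix k; Pmat = diag(P1, P2); r: attenuation level
    """
    P1, P2 = Pmat[0, 0], Pmat[1, 1]
    # Eq. (12): u = -k e - theta_hat^T phi(x) - (1/(2 r^2)) P e
    u = (-K @ e
         - np.array([theta1 @ phi1, theta2 @ phi2])
         - Pmat @ e / (2.0 * r**2))
    # Eq. (14): theta_hat_dot_i = Gamma_i * phi_i(x) * P_i * e_i (Euler step, an assumption)
    theta1 = theta1 + dt * Gamma1 * phi1 * P1 * e[0]
    theta2 = theta2 + dt * Gamma2 * phi2 * P2 * e[1]
    return u, theta1, theta2
```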


Fig. 3. The synchronization of two Ghostburster neurons (master and slave) via adaptive RBFNN H∞ control. (a) Response curves of the state variables V_{s,m}, V_{s,s} and the corresponding error e_1 = V_{s,s} − V_{s,m}. (b) Response curves of the state variables V_{d,m}, V_{d,s} and the corresponding error e_2 = V_{d,s} − V_{d,m}. (c) Time curves of the weight vector of RBFNN 1, i.e. θ_1. (d) Time curves of the weight vector of RBFNN 2, i.e. θ_2. The control actions are implemented after 100 ms.

Then the H∞ tracking performance [23] for the overall system satisfies the relationship

(1/2) ∫_0^T e^T(t) Q e(t) dt ≤ (1/2) e^T(0) P e(0) + (1/2) θ̃_1^T(0) Γ_1^{−1} θ̃_1(0) + (1/2) θ̃_2^T(0) Γ_2^{−1} θ̃_2(0) + (1/2) r^2 ∫_0^T ω_1^2(t) dt + (1/2) r^2 ∫_0^T ω_2^2(t) dt    (15)

Proof. Consider the Lyapunov function

V = (1/2) e^T P e + (1/2) θ̃_1^T Γ_1^{−1} θ̃_1 + (1/2) θ̃_2^T Γ_2^{−1} θ̃_2    (16)

Differentiating (16) with respect to time and noting (10) and (12),

V̇ = (1/2)[e^T P ė + ė^T P e] − θ̃_1^T Γ_1^{−1} θ̂̇_1 − θ̃_2^T Γ_2^{−1} θ̂̇_2
   = (1/2) e^T [A_1^T P + P A_1] e + (−(1/(2r^2)) P e + ω)^T P e + θ̃_1^T (φ_1(x) P_1 e_1 − Γ_1^{−1} θ̂̇_1) + θ̃_2^T (φ_2(x) P_2 e_2 − Γ_2^{−1} θ̂̇_2)    (17)

Then, substituting (13) and (14) into (17), the following is obtained:

V̇ = −(1/2) e^T Q e + [−(1/(2r^2)) (P_1 e_1)^2 + ω_1 P_1 e_1] + [−(1/(2r^2)) (P_2 e_2)^2 + ω_2 P_2 e_2]
   = −(1/2) e^T Q e + (1/2)(r ω_1)^2 − (1/2)[(1/r) P_1 e_1 − r ω_1]^2 + (1/2)(r ω_2)^2 − (1/2)[(1/r) P_2 e_2 − r ω_2]^2
   ≤ −(1/2) e^T Q e + (1/2)(r ω_1)^2 + (1/2)(r ω_2)^2    (18)


Fig. 4. The synchronization of two Ghostburster neurons (master and slave) via adaptive RBFNN H∞ control under parameter perturbation. (a) Response curves of the state variables V_{s,m}, V_{s,s} and the corresponding error e_1 = V_{s,s} − V_{s,m}. (b) Response curves of the state variables V_{d,m}, V_{d,s} and the corresponding error e_2 = V_{d,s} − V_{d,m}. (c) Time curves of the weight vector of RBFNN 1, i.e. θ_1. (d) Time curves of the weight vector of RBFNN 2, i.e. θ_2. The control actions are implemented after 50 ms, and a perturbation in the form of random noise (intensity 0.01) is introduced into the parameter g_leak of the slave system after 100 ms.

By Assumption 2, integrating both sides of (18) from t = 0 to t = T gives

V(T) − V(0) ≤ −(1/2) ∫_0^T e^T Q e dt + (1/2) r^2 ∫_0^T ω_1^2(t) dt + (1/2) r^2 ∫_0^T ω_2^2(t) dt    (19)

Since V(T) ≥ 0,

(1/2) ∫_0^T e^T Q e dt ≤ V(0) + (1/2) r^2 ∫_0^T ω_1^2(t) dt + (1/2) r^2 ∫_0^T ω_2^2(t) dt
   ≤ (1/2) e^T(0) P e(0) + (1/2) θ̃_1^T(0) Γ_1^{−1} θ̃_1(0) + (1/2) θ̃_2^T(0) Γ_2^{−1} θ̃_2(0) + (1/2) r^2 ∫_0^T ω_1^2(t) dt + (1/2) r^2 ∫_0^T ω_2^2(t) dt    (20)

Thus the H∞ tracking performance [23] is achieved for a prescribed attenuation level r, and the synchronization of systems (2) and (3) can be obtained.

4. Simulation results

In this section, numerical simulations are carried out for the global synchronization of the Ghostburster neuron systems via the proposed adaptive neural network H∞ control. The periodic neuron is chosen as the master and the chaotic one as the slave, so that the chaotic behavior is driven to become regular. According to the results of Section 2, the stimulus of the master neuron system is chosen as I_s = 6.5 μA and the stimulus of the slave system as I_s = 9 μA. The control parameters are chosen as

k = [ 9/50  5/2
      5/3   9/50 ],   Q = [ 1  0
                            0  1 ]

which then gives

P = [ 0.2  0
      0    0.3 ]

and r = 0.02. The initial states are chosen as

[V_{s,m0}, n_{s,m0}, V_{d,m0}, h_{d,m0}, n_{d,m0}, p_{d,m0}]^T = [1, 0, 1, 0, 0, 0]^T
[V_{s,s0}, n_{s,s0}, V_{d,s0}, h_{d,s0}, n_{d,s0}, p_{d,s0}]^T = [0, 0, 0, 0, 0, 0]^T

and the initial weight vectors are θ_1^T(0) = 0 and θ_2^T(0) = 0. According to Sanner and Slotine [22], the centers and widths are chosen on a regular lattice in the respective compact sets in our simulations. The larger the number of neurons in the RBFNN, the higher the approximation precision; however, taking numerical simulation efficiency into account, the neural network θ̂_1^T φ_1(x) contains 3^4 nodes (the inputs of the NN are V_{s,m}, n_{s,m}, V_{s,s}, and n_{s,s}; the centers of φ_1(x) are uniformly distributed in the space [−80, 40] × [0, 0.9] × [−80, 40] × [0, 0.9]; the widths of φ_1(x) are 40) and the neural network θ̂_2^T φ_2(x) contains 3^7 nodes (the inputs of the NN are V_{d,m}, h_{d,m}, n_{d,m}, p_{d,m}, V_{d,s}, h_{d,s}, and p_{d,s}; the centers of the basis functions φ_2(x) are uniformly distributed in the space [−70, 10] × [0, 0.9] × [0, 0.9] × [0, 0.25] × [−70, 10] × [0.1, 0.8] × [0.1, 0.25]; the widths of φ_2(x) are 40).
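The gain choice above can be checked numerically. The sketch below, again assuming g_c = 1, forms A from Eq. (5), verifies that A_1 = A − k is Hurwitz, and solves the Lyapunov equation (13) for P with Q = I using SciPy; the result should be compared with the P reported above (the exact values depend on the assumed g_c).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

gc, kappa, gleak = 1.0, 0.4, 0.18   # kappa is the area ratio k; gc = 1 is assumed as in [10]

# A from Eq. (5) and the gain matrix k chosen above
A = np.array([[-(gc / kappa + gleak),           gc / kappa],
              [  gc / (1 - kappa), -(gc / (1 - kappa) + gleak)]])
K = np.array([[9.0 / 50.0, 5.0 / 2.0],
              [5.0 / 3.0,  9.0 / 50.0]])
A1 = A - K

print(np.linalg.eigvals(A1))        # both eigenvalues negative, so A1 is Hurwitz

# Solve A1^T P + P A1 = -Q for Q = I (Eq. (13))
Q = np.eye(2)
Pmat = solve_continuous_lyapunov(A1.T, -Q)
print(Pmat)                         # compare with the diagonal P used in the text
```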


First, adaptive RBFNN control without the H∞ term is applied to the two Ghostburster neurons. Fig. 2 shows the response curves of the state variables and the corresponding errors. As shown, before the control is implemented, the two neurons exhibit their own dynamical behaviors and bear no relation to each other. After the controller is applied, the slave neuron can follow the master's trajectory, but the errors remain noticeably large. Next, adaptive RBFNN control with the H∞ term is carried out. As shown in Fig. 3, once the controller is implemented the slave neuron follows the master's trajectory almost immediately and the errors converge to nearly zero (|e_1| is confined to [0, 0.03 mV] and |e_2| to [0, 0.2 mV]), in contrast to Fig. 2. Compared with the plain RBFNN control method, the errors are clearly reduced by adding the H∞ term, because the H∞ control guarantees that the approximation error of the system is confined to a very small range. To further explore the validity and effectiveness of the proposed method, an input disturbance and a parameter perturbation are introduced into the Ghostburster neurons.

Fig. 5. The synchronization of two Ghostburster neurons (master and slave) via adaptive RBFNN H∞ control under input disturbance. (a) Response curves of the state variables V_{s,m}, V_{s,s} and the corresponding error e_1 = V_{s,s} − V_{s,m}. (b) Response curves of the state variables V_{d,m}, V_{d,s} and the corresponding error e_2 = V_{d,s} − V_{d,m}. (c) Time curves of the weight vector of RBFNN 1, i.e. θ_1. (d) Time curves of the weight vector of RBFNN 2, i.e. θ_2. The control actions are implemented after 50 ms, and a disturbance in the form of random noise (intensity 0.1) is introduced into the input stimulus I_s of the slave system after 100 ms.


Fig. 6. The synchronization of two Ghostburster neurons (master and slave) via adaptive RBFNN H∞ control under parameter perturbation. (a) Response curves of the state variables V_{s,m}, V_{s,s} and the corresponding error e_1 = V_{s,s} − V_{s,m}. (b) Response curves of the state variables V_{d,m}, V_{d,s} and the corresponding error e_2 = V_{d,s} − V_{d,m}. (c) Time curves of the weight vector of RBFNN 1, i.e. θ_1. (d) Time curves of the weight vector of RBFNN 2, i.e. θ_2. The control actions are implemented after 100 ms, and a perturbation in the form of random noise (intensity 0.01) is introduced into the parameter g_leak of the slave system after 100 ms. r = 0.01.

Figs. 4 and 5 show the response curves of the state variables and the corresponding errors when the perturbation is added to the parameter g_leak and when the disturbance is added to the input stimulus, respectively. The perturbation and the disturbance are both defined as Gaussian white noise. Compared with Fig. 3, it can be seen clearly that although different kinds of disturbances are suddenly introduced into the systems, the proposed controller still maintains good performance: the synchronization errors are constrained to a very small region. The synchronization error can be contracted to an even smaller region by adjusting the attenuation level r, as shown in Fig. 6, where the errors are clearly reduced by decreasing r. Therefore the control still works well compared with the disturbance-free case.
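A small sketch of how such robustness tests can be set up is given below: Gaussian white noise of the stated intensities is switched on after the indicated time and added to the slave neuron's g_leak or I_s. The function name and the per-step noise injection are assumptions; Figs. 4 to 6 apply the two perturbations separately.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_slave_params(t, base_gleak=0.18, base_Is=9.0, noise_on=100.0,
                           sigma_gleak=0.01, sigma_Is=0.1):
    """Slave-side gleak and Is with Gaussian white noise switched on after noise_on (ms).

    In the paper's tests the two perturbations are applied in separate runs; here
    both are returned for brevity.
    """
    gleak, Is = base_gleak, base_Is
    if t >= noise_on:
        gleak = base_gleak + sigma_gleak * rng.standard_normal()
        Is = base_Is + sigma_Is * rng.standard_normal()
    return gleak, Is
```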

5. Conclusions

In this paper, the synchronization of two Ghostburster neurons under external electrical stimulation via adaptive neural network H∞ control has been investigated. RBF neural networks are employed to approximate the uncertain nonlinear parts of the synchronization error system, and H∞ control is used to attenuate the approximation errors, ionic channel noise, and disturbances.

First, the periodic and chaotic dynamics of an individual Ghostburster neuron under different external electrical stimulations are studied. Second, the adaptive H∞ scheme is applied to synchronize the two Ghostburster neuron systems in a master–slave structure. According to the Lyapunov stability theorem, stability of the closed-loop error system can be guaranteed by a proper choice of the control parameters, which implies asymptotic synchronization of the master–slave system. Finally, numerical simulations demonstrate that the proposed control method can effectively make the slave system act as the master one even when the system is exposed to disturbances. With the proposed control method, the synchronization error can be further reduced by adjusting the control gain, increasing the number of neurons in the RBFNN, and so on.

Acknowledgments The authors gratefully acknowledge the support of the Hong Kong Polytechnic University.


References

[1] C. Chow, J. White, J. Ritt, N. Kopell, Frequency control in synchronous networks of inhibitory neurons, J. Comput. Neurosci. 5 (1998) 407–420.
[2] P. Ashwin, Nonlinear dynamics: synchronization from chaos, Nature 422 (2003) 384–385.
[3] J. Lian, J. Shuai, D.M. Durand, Control of phase synchronization of neuronal activity in the rat hippocampus, J. Neural Eng. 1 (2004) 46–54.
[4] A.L. Hodgkin, A.F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol. 117 (1952) 500–544.
[5] Y.D. Sato, M. Shiino, Spiking neuron models with excitatory or inhibitory synaptic couplings and synchronization phenomena, Phys. Rev. E 66 (2002) 041903.
[6] J. Wang, B. Deng, K.M. Tsang, Chaotic synchronization of neurons coupled with gap junction under external electrical stimulation, Chaos Solitons Fractals 22 (2004) 469–476.
[7] W. Wang, G. Perez, H.A. Cerdeira, Dynamical behavior of the firings in a coupled neuronal system, Phys. Rev. E 47 (1993) 2893–2898.
[8] M. Dhamala, V.K. Jirsa, M.Z. Ding, Transitions to synchrony in coupled bursting neurons, Phys. Rev. Lett. 92 (2004) 028101.
[9] Q.Y. Wang, Q.S. Lu, G.R. Chen, D.H. Guo, Chaos synchronization of coupled neurons with gap junctions, Phys. Lett. A 356 (2006) 17–25.
[10] B. Doiron, C. Laing, A. Longtin, Ghostbursting: a novel neuronal burst mechanism, J. Comput. Neurosci. 12 (2002) 5–25.
[11] C.R. Laing, B. Doiron, A. Longtin, L. Maler, Ghostburster: the effects of dendrites on spike patterns, Neurocomputing 44–46 (2002) 127–132.
[12] J. Wang, B. Deng, K.M. Tsang, Chaotic synchronization of neurons coupled with gap junction under external electrical stimulation, Chaos Solitons Fractals 22 (2004) 469–476.
[13] B. Deng, J. Wang, X. Fei, Synchronizing two coupled chaotic neurons in external electrical stimulation using backstepping control, Chaos Solitons Fractals 29 (2006) 182–189.
[14] M. Haeri, A.A. Emadzadeh, Synchronizing different chaotic systems using active sliding mode control, Chaos Solitons Fractals 31 (2007) 119–129.
[15] H. Layeghi, M.T. Arjmand, H. Salarieh, A. Alasty, Stabilizing periodic orbits of chaotic systems using fuzzy adaptive sliding mode control, Chaos Solitons Fractals 37 (2008) 1125–1135.
[16] J. Wang, T. Zhang, B. Deng, Synchronization of FitzHugh–Nagumo neurons in external electrical stimulation via nonlinear control, Chaos Solitons Fractals 31 (2007) 30–38.
[17] J. Wang, Z. Zhang, H. Li, Synchronization of FitzHugh–Nagumo systems in EES via H∞ variable universe adaptive fuzzy control, Chaos Solitons Fractals 36 (2008) 1332–1339.
[18] R.M. Sanner, J.E. Slotine, Gaussian networks for direct adaptive control, IEEE Trans. Neural Networks 3 (1992) 837–863.
[19] C.M. Gray, P. Konig, A.K. Engel, W. Singer, Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties, Nature 338 (1989) 334–337.
[20] A.-M.M. Oswald, M.J. Chacron, B. Doiron, J. Bastian, L. Maler, Parallel processing of sensory input by bursts and isolated spikes, J. Neurosci. 24 (18) (2004) 4351–4362.
[21] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice-Hall, Upper Saddle River, NJ, 1999.
[22] R.M. Sanner, J.E. Slotine, Gaussian networks for direct adaptive control, IEEE Trans. Neural Networks 3 (1992) 837–863.
[23] B.S. Chen, C.H. Lee, Y.C. Chang, H∞ tracking design of uncertain nonlinear SISO systems: adaptive fuzzy approach, IEEE Trans. Fuzzy Systems 4 (1996) 32–43.

H.Y. Li received her Ph.D. degree from Tianjin University in 2007. She is now a Lecturer in the Automation and Electrical Engineering College, Tianjin University of Technology and Education, Tianjin 300222, P.R. China. Her major research interests are in nonlinear systems and neural networks.

Y.K. Wong received his B.Sc. and M.Sc. degrees from the University of London, and his Ph.D. degree from the Heriot-Watt University, UK. He joined the Hong Kong Polytechnic University in 1980. His current research interests include modeling, simulation, and intelligent control.

W.L. Chan received his B.Sc. (Eng) and M.Phil. degrees from the University of Hong Kong in 1988 and 1993, respectively. He then received his Ph.D. degree from City University London in 2000. He is now an Associate Professor in the Department of Electrical Engineering, The Hong Kong Polytechnic University. His major research interests are in microprocessor applications and applications of artificial intelligence.

K.M. Tsang received his B.Eng. and Ph.D. degrees in Control Engineering from the University of Sheffield, UK, in 1985 and 1988, respectively. At present, he is an Associate Professor in the Department of Electrical Engineering of the Hong Kong Polytechnic University. His research interests include system identification, fuzzy logic, adaptive control, and pattern recognition.