Variable-structure-systems based approach for online learning of spiking neural networks and its experimental evaluation


Journal of the Franklin Institute 351 (2014) 3269–3285 www.elsevier.com/locate/jfranklin

Y. Oniz, O. Kaynak

Electrical and Electronics Engineering Department, Bogazici University, Istanbul, Turkey
Harbin Institute of Technology, Harbin, China

Received 6 August 2013; received in revised form 6 March 2014; accepted 6 March 2014 Available online 15 March 2014

Abstract

Raising the level of biological realism by utilizing the timing of individual spikes, spiking neural networks (SNNs) are considered to be the third generation of artificial neural networks. In this work, a novel variable-structure-systems based approach for the online learning of SNNs is developed and tested on the identification and speed control of a real-time servo system. In this approach, the neurocontroller parameters are used to define a time-varying sliding surface that drives the control error signal to zero. To prove the convergence property of the developed algorithm, the Lyapunov stability method is utilized. The results of the real-time experiments on the laboratory servo system for a number of different load conditions, including nonlinear and time-varying ones, indicate that the control structure exhibits highly robust behavior against disturbances and sudden changes in the command signal.

© 2014 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

1. Introduction

Artificial neural networks (ANNs) are widely considered among the most common and effective approaches to model-free design. The first generation of artificial neural networks consists of McCulloch–Pitts neurons [1]. In this conceptually very simple model, a neuron sends


a binary ‘high’ signal if the sum of its weighted incoming signals is above a threshold value. Neurons of the second generation use a continuous activation function to compute their output signals instead of a step or threshold function [2]. This feature makes them suitable for use in systems with analog inputs and outputs. The third generation of ANNs enhances the level of biological realism by utilizing spiking neuron (SN) models [3]. Unlike the first two generations, in which the outputs of the network can be considered as the normalized firing rates of the neurons within a certain time interval, SN models make direct use of the temporal information of the individual spikes, whereas the magnitude carries no information.

In one of the leading works on artificial neural networks utilizing SN models, Maass [4] states that a network consisting of spiking neurons can be applied to any problem that is solvable by the first two generations, and that such a network has at least the same computational power as neural networks consisting of perceptrons or sigmoidal neurons. Furthermore, it has been shown that spiking neural networks (SNNs) are capable of processing a considerable amount of data with a relatively small number of spikes [5]. The above-mentioned advantages and an increasing interest in temporal computation have encouraged the use of SNNs in a wide variety of applications, including classification [6–9], pattern recognition [10,11] and control [12–15]. In this paper, SNNs are considered for the identification and control of dynamic plants.

The supervised learning of SNNs has been considered in many research works [16], and most of these have concentrated on the development of gradient ascent/descent-based learning algorithms, as in the case of traditional ANNs. In [6,17], a gradient descent-based learning algorithm called SpikeProp is presented; it has received wide acceptance, as it is analogous to the backpropagation algorithm [18] used in the learning of second generation neural networks. To improve the performance of SpikeProp, a number of modifications to the basic algorithm have been proposed, such as the inclusion of a momentum term [19], QuickProp [20], or enabling multiple firing [21]. Although the modified learning methods improve the convergence properties to some extent, as with most gradient-based learning algorithms the convergence speed is still relatively slow compared to other approaches, and the algorithm can easily be trapped in local optima [7]. Furthermore, a number of numerical robustness issues also need to be taken into account when recursive estimation algorithms are applied over long periods of time [22].

The variable structure systems (VSSs) approach provides the means for a robust and stable control system through a comprehensive stability analysis, and its high performance in the control of systems confronted with uncertainties and imprecision has promoted research activities in developing new VSS-based training algorithms for computationally intelligent structures [23–28]. In this study, VSS theory is utilized in the derivation of the parameter update rules for SNNs. Stable online tuning of the parameters and a fast learning speed are the prominent characteristics of the proposed algorithm. Unlike the gradient descent-based learning algorithms, which aim to minimize an error function, in the proposed approach a sliding surface is established using the SNN controller parameters. The Lyapunov stability method is adopted in the derivation of the conditions that enforce the learning error toward zero in a stable manner.

The paper is organized as follows: Section 2 describes the structure of the SNN. In Section 3, the VSS-based learning algorithms are derived for SNNs. Section 4 describes the generation of spike times from analog data. An identification and a control application of the SNN, along with a description of the experimental setup, are given in Section 5. Finally, Section 6 presents the conclusions and future work.


2. Spiking neural networks

Taking a closer look at the working principle of a biological neuron is helpful in gaining a deeper insight into the functioning of a spiking neuron. In a biological neural network, the dendrites of a postsynaptic neuron collect electrical signals coming from the neighboring presynaptic neurons and transmit these signals to the soma of the postsynaptic neuron. If the sum of the incoming weighted signals is above a threshold value, then a spike (also called an action potential) is generated and propagated along the axon and its branches to other neurons. The junction between the axon branches of a presynaptic neuron and the dendrites of a postsynaptic neuron is called a synapse. When an action potential arrives at a synapse, it triggers a complex chain of biochemical reactions causing a change in the potential of the postsynaptic neuron. This process is relatively slow, and it is associated with a certain delay specific to that synapse.

To formulate the information transmission between spiking neurons, various neuron models, including the integrate-and-fire model, the Izhikevich model, the Hodgkin–Huxley model, and the Spike Response Model, have been proposed in the literature. These neuron models differ from each other with regard to their degree of biological accuracy and computational efficiency. In this paper, the Spike Response Model (SRM) [29], which is extensively used in many research works owing to its mathematical simplicity, is utilized to describe the generation of a new spike train from incoming spikes.

The general structure of a spiking neural network is presented in Fig. 1. Unlike the network structures of the second generation, multiple synapses are involved in each connection between a presynaptic and a postsynaptic neuron. In this model each synapse is associated with a different delay and an adjustable weight. The different delay times are introduced so that the spike emitted by a presynaptic neuron will have an effect on the postsynaptic neuron potential for a longer period of time. Studies on biological neurons [30,31] show that the spikes of a neuron have similar shapes and that the information is encoded in the number and the precise timing of the spikes rather than in their form. Neural networks consisting of spiking neurons attempt to imitate this phenomenon by using the precise timing of the spikes as the means of communication and neural computation instead of utilizing their magnitudes. Thus, the inputs and outputs of a spiking neuron are the temporal information of the spikes, whose magnitude information is omitted.

Throughout this study, it is assumed that each neuron is capable of generating a single spike during the simulation interval when its membrane potential reaches a predefined threshold value θ for the first time. Suppose that a presynaptic neuron i fires a spike at time t_i. This spike will be transmitted to the postsynaptic neuron j by the kth synapse at time t_i + d^k, where d^k denotes the delay of the kth synapse. To describe the resulting membrane potential of the postsynaptic

Fig. 1. The structure of an SNN.


neuron j at time t, the following spike response function is employed:

x_j(t) = \sum_{i=1}^{N} \sum_{k=1}^{K} w_{ij}^k \, \varepsilon(t - t_i - d^k) = \sum_{i=1}^{N} \sum_{k=1}^{K} w_{ij}^k \, y_i^k(t)    (1)

In this formulation N corresponds to the number of presynaptic neurons, whereas K denotes the number of synapses. d^k and w_{ij}^k are the delay and the weight coefficient of the kth synapse between the presynaptic neuron i and the postsynaptic neuron j, respectively. The spike response function ε(t) describes the effect of incoming spikes on the internal state of the postsynaptic neuron. Although many different mathematical formulations are available, the general shape of the spike response function consists of a short rising part and a long decaying part. The formulation given in [6] is adopted in this study:

\varepsilon(t) = \begin{cases} \dfrac{t}{\tau} \, e^{1 - t/\tau} & t > 0 \\ 0 & t \le 0 \end{cases}    (2)

The decay time constant τ models the rise and decay time of the membrane potential. The postsynaptic neuron j fires a spike at time t_j when its membrane potential crosses the predetermined threshold value θ for the first time (that is, x_j(t) ≥ θ). Thus, the firing time t_j is a nonlinear function of the state variable x_j: t_j = t_j(x_j). The threshold θ is constant and equal for all neurons in the network.
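As a concrete illustration (our sketch, not code from the paper), the following Python snippet evaluates the spike response function of Eq. (2), accumulates the membrane potential of Eq. (1), and returns the first threshold crossing; the network size, delays, threshold and time grid are illustrative assumptions.

```python
import numpy as np

def epsilon(t, tau=8e-3):
    """Spike response function of Eq. (2): (t/tau)*exp(1 - t/tau) for t > 0, else 0."""
    return np.where(t > 0.0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def membrane_potential(t, w, t_pre, d, tau=8e-3):
    """Eq. (1): x_j(t) = sum_i sum_k w[i, k] * eps(t - t_pre[i] - d[k])."""
    return np.sum(w * epsilon(t - t_pre[:, None] - d[None, :], tau))

def first_firing_time(w, t_pre, d, theta=2.0, dt=1e-4, t_max=0.05):
    """Return the first time at which x_j(t) >= theta, or None if the neuron never fires."""
    for t in np.arange(0.0, t_max, dt):
        if membrane_potential(t, w, t_pre, d) >= theta:
            return t
    return None

# Illustrative example: 3 presynaptic neurons, 4 synapses per connection.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(3, 4))        # synaptic weights w_ij^k
t_pre = np.array([2e-3, 5e-3, 8e-3])          # presynaptic firing times (s)
d = np.array([1e-3, 2e-3, 3e-3, 4e-3])        # synaptic delays d^k (s)
print(first_firing_time(w, t_pre, d))
```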

3. Parameter update rules

Gradient-based learning methods have been frequently used in NN-based control applications to determine the proper values of the synaptic weights for the generation of the desired output signals [32–35]. Despite the numerous successful applications reported in the literature, these learning algorithms can very often lead to suboptimal performance in terms of convergence speed, robustness, and computational burden. Furthermore, the stability of the learning process is not guaranteed, as the error surface includes many local minima and the tuning process may easily get stuck in one of them [36]. The use of evolutionary approaches has been suggested to address these problems [37–41], but the stability of such approaches is questionable, the optimal values for the stochastic operators are difficult to derive, and the computational burden can be very high.

The shortcomings of the evolutionary approaches have given rise to the development of sliding mode control (SMC) theory-based algorithms for the adaptation of the neural networks' weights [23–26,28,42–44], by means of which a robust system response in handling uncertainties and imprecision, and faster convergence than the traditional learning techniques in the online tuning of ANNs, can be ensured. SMC has attracted the attention of many researchers in the field of computational intelligence owing to its versatile characteristics, such as robustness to external disturbances, parameter variations and uncertainties [45,46]. The underlying idea of this approach is to restrict the motion of the system to a manifold referred to as the sliding surface, on which a predefined function of the error is zero [47]. Let us consider the single input dynamic system

y^{(n)} = f(x(t)) + g(x(t)) \, u(t)    (3)

In this formulation x(t) refers to the state vector, u(t) to the control input, and y to the output of the system. The order of differentiation is denoted by (n). We further assume that f(x(t)) and g(x(t)) are nonlinear functions of time and the system's states. The discrepancy between


the desired and the actual output states of the system and its derivatives are widely used to construct a sliding surface as follows [48]:

s(x(t)) = \left( \frac{d}{dt} + \lambda \right)^{n-1} (y - y_d) = 0    (4)

where λ is a strictly positive constant. As can be seen, sliding mode control makes it possible to replace an n-dimensional tracking problem with a first-order stabilization problem in s. The sliding surface s(x) will be attractive if [49]:

- Any trajectory starting on the surface remains there.
- Any trajectory starting outside the surface tends to the sliding surface at least asymptotically.

The first condition requires that if s(x(t)) = 0, then its derivative satisfies \dot{s}(x(t)) = 0 as well. The second condition is satisfied if

\lim_{s \to 0^+} \dot{s} < 0 \quad \text{and} \quad \lim_{s \to 0^-} \dot{s} > 0    (5)

This is referred to as the reachability condition, and to enforce it the Lyapunov stability method is applied [50]. Consider the globally positive definite Lyapunov function candidate

V(x, t) = \frac{1}{2} s^2 \ge 0    (6)

The equality holds if and only if s = 0. Recalling the Lyapunov stability theorem, the derivative of this function should be negative definite to guarantee stability, the equality sign being applicable if and only if s = 0:

\dot{V}(x, t, s) = s \, \frac{\partial s}{\partial t} \le 0    (7)

Thus, if we define a Lyapunov candidate function that satisfies the reachability condition, then it is guaranteed that the state trajectories converge to the sliding surface and move along it for all succeeding time.

The proposed incremental learning algorithm makes direct use of variable structure systems theory and establishes a sliding motion in terms of the neural network's parameters. A sliding surface is constructed by letting s equal the discrepancy between the actual and the desired firing times/values of the output neuron. This selection of the sliding surface corresponds to n = 1 in Eq. (4). Next, the weight update rules are derived in such a way that the product of the sliding surface and its derivative, i.e. the error times the derivative of the error, is always negative, which is consistent with the reachability condition. Therefore, unlike the gradient descent-based learning algorithms, the convergence of this approach is always guaranteed. In the following subsection, SMC-based learning algorithms are derived for SNNs with two layers.

3.1. Two layered SNN

Consider a two-layered feedforward spiking neural network. It is assumed that there are n neurons in the input layer, m neurons in the hidden layer and only one neuron in the output layer. The neuron in the output layer has a linear activation function to eliminate the need for decoding and thus reduce the computational burden.


We will use the following definitions:

- t_h(t) = [t_{h1}(t) t_{h2}(t) … t_{hn}(t)]^T – vector of the firing times of the input neurons.
- t_i(t) = [t_{i1}(t) t_{i2}(t) … t_{im}(t)]^T – vector of the firing times of the hidden neurons.
- τ_j(t) – scalar output of the network.
- w_{hi}^k(t) – time-varying weight of the connection between neuron h in the input layer and node i in the hidden layer for the synaptic terminal k.
- w_{ij}(t) – time-varying weight of the connection between neuron i in the hidden layer and node j in the output layer.

It is assumed that the vectors t_h(t) and t_i(t), as well as their derivatives \dot{t}_h(t) = [\dot{t}_{h1}(t) \dot{t}_{h2}(t) … \dot{t}_{hn}(t)]^T and \dot{t}_i(t) = [\dot{t}_{i1}(t) \dot{t}_{i2}(t) … \dot{t}_{im}(t)]^T, are bounded. The scalar signal τ_j^d(t) represents the time-varying desired output of the neural network. It is also assumed that τ_j^d(t) and \dot{τ}_j^d(t) are bounded signals. We define the learning error e(t) as the discrepancy between the actual output of the neural network τ_j and the desired output τ_j^d. Using the SMC approach, we define the zero value of the learning error coordinate e(t) as a time-varying sliding surface:

S(e(t)) = e(t) = \tau_j(t) - \tau_j^d(t) = 0    (8)

which is the condition that guarantees that the neural network output τ_j(t) coincides with the desired output signal τ_j^d(t) for all time t > t_hit, where t_hit is the hitting time of e(t) = 0.

Definition. A sliding motion will take place on the sliding manifold S(e(t)) = e(t) = 0 after time t_hit if the condition S(t)\dot{S}(t) = e(t)\dot{e}(t) < 0 holds for all t in some nontrivial semi-open subinterval of time of the form [t, t_hit) ⊂ (−∞, t_hit).

The learning algorithm for the neural network weights w_{hi}^k(t) and w_{ij}(t) should be derived in such a way that the sliding mode condition of this Definition is enforced. We choose the Lyapunov function candidate as follows:

V(e(t)) = \frac{1}{2} e^2(t) = \frac{1}{2} \left( \tau_j(t) - \tau_j^d(t) \right)^2 \ge 0    (9)

Differentiating V(e(t)) yields

\dot{V}(e(t)) = e \dot{e} = e \left( \dot{\tau}_j - \dot{\tau}_j^d \right)    (10)

The time derivative \dot{\tau}_j is found as

\frac{d\tau_j}{dt} = \frac{\partial \tau_j}{\partial w_{ij}} \frac{dw_{ij}}{dt} + \frac{\partial \tau_j}{\partial w_{hi}^k} \frac{dw_{hi}^k}{dt} = \frac{\partial \tau_j}{\partial w_{ij}} \dot{w}_{ij} + \frac{\partial \tau_j}{\partial t_i} \frac{\partial t_i}{\partial x_i(t_i)} \frac{\partial x_i(t_i)}{\partial w_{hi}^k} \dot{w}_{hi}^k    (11)

The first term in Eq. (11) can be computed as

\frac{\partial \tau_j}{\partial w_{ij}} = \frac{\partial \left( \sum_{i \in \Gamma_j} w_{ij} t_i \right)}{\partial w_{ij}} = t_i    (12)

The derivative of the neural network output with respect to the firing times of the neurons in the hidden layer yields

\frac{\partial \tau_j}{\partial t_i} = \frac{\partial \left( \sum_{i \in \Gamma_j} w_{ij} t_i \right)}{\partial t_i} = w_{ij}    (13)


Since t_i is not continuous, it is not possible to calculate the derivative term ∂t_i/∂x_i(t_i) directly. To overcome this problem, it is assumed that for a small enough region around t = t_i, the function x_i can be approximated by a linear function of t. For such a small region, we approximate the threshold function by δt_i(x_i) = −δx_i(t_i)/α, where α equals the local derivative of x_i(t) with respect to t. Thus, the third term on the right-hand side of Eq. (11) evaluates to

\frac{\partial t_i}{\partial x_i(t_i)} = \left. \frac{\partial t_i(x_i)}{\partial x_i(t)} \right|_{x_i = \theta} = \frac{-1}{\partial x_i(t)/\partial t} = \frac{-1}{\sum_{h \in \Gamma_i} \sum_k w_{hi}^k \, \partial y_h^k(t_i)/\partial t_i}    (14)

The last term ∂x_i(t_i)/∂w_{hi}^k can be computed as

\frac{\partial x_i(t_i)}{\partial w_{hi}^k} = \frac{\partial \left( \sum_{h \in \Gamma_i} \sum_k w_{hi}^k \, y_h^k(t_i) \right)}{\partial w_{hi}^k} = y_h^k(t_i)    (15)

Thus, the time derivative of τ_j can be written as

\dot{\tau}_j = t_i \dot{w}_{ij} - \frac{y_h^k(t_i) \, w_{ij}}{\sum_{h \in \Gamma_i} \sum_k w_{hi}^k \, \partial y_h^k(t_i)/\partial t_i} \, \dot{w}_{hi}^k    (16)

Using Eq. (16), Eq. (10) can be written as

\dot{V} = e \left[ t_i \dot{w}_{ij} - \frac{y_h^k(t_i) \, w_{ij} \, \dot{w}_{hi}^k}{\sum_{h \in \Gamma_i} \sum_k w_{hi}^k \, \partial y_h^k(t_i)/\partial t_i} - \dot{\tau}_j^d \right]    (17)

If the learning algorithm for the weights w_{ij} and w_{hi}^k is chosen as

\dot{w}_{ij} = -\alpha \, t_i \, \mathrm{sign}(e)    (18)

\dot{w}_{hi}^k = \alpha \, y_h^k(t_i) \, w_{ij} \left\{ \sum_{h \in \Gamma_i} \sum_k w_{hi}^k \frac{\partial y_h^k(t_i)}{\partial t_i} \right\} \mathrm{sign}(e)    (19)

then the derivative of τ_j becomes

\dot{\tau}_j = -\alpha (t_i)^2 \, \mathrm{sign}(e) - \alpha \frac{y_h^k(t_i)^2 \, w_{ij}^2}{\sum_{h \in \Gamma_i} \sum_k w_{hi}^k \, \partial y_h^k(t_i)/\partial t_i} \left\{ \sum_{h \in \Gamma_i} \sum_k w_{hi}^k \frac{\partial y_h^k(t_i)}{\partial t_i} \right\} \mathrm{sign}(e) = -\alpha (t_i)^2 \, \mathrm{sign}(e) - \alpha \, y_h^k(t_i)^2 \, w_{ij}^2 \, \mathrm{sign}(e)    (20)

We consider that the desired spike time of the output neuron is constant, i.e. \dot{\tau}_j^d = 0. For the selected weight update rules, the derivative of the Lyapunov candidate function yields

\dot{V} = e \left[ -\alpha (t_i)^2 \, \mathrm{sign}(e) - \alpha \, y_h^k(t_i)^2 \, w_{ij}^2 \, \mathrm{sign}(e) \right] = -\alpha (t_i)^2 |e| - \alpha \, y_h^k(t_i)^2 \, w_{ij}^2 |e| = -\alpha |e| \left[ (t_i)^2 + y_h^k(t_i)^2 \, w_{ij}^2 \right]    (21)

This implies

\dot{V} < 0, \quad \forall e \neq 0    (22)
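To make the update rules concrete, the following Python sketch (ours, under the single-output assumptions of Section 3.1) applies Eqs. (18) and (19) over one sampling period; the Euler-style discretization, the parameter values, and the array shapes are assumptions of this sketch rather than details fixed by the paper.

```python
import numpy as np

TAU = 8e-3  # decay time constant of Eq. (2); the value is an assumption of this sketch

def eps(t):
    """Spike response function of Eq. (2)."""
    return np.where(t > 0.0, (t / TAU) * np.exp(1.0 - t / TAU), 0.0)

def d_eps(t):
    """Analytic time derivative of Eq. (2), used for dy_h^k(t_i)/dt_i."""
    return np.where(t > 0.0, np.exp(1.0 - t / TAU) * (1.0 - t / TAU) / TAU, 0.0)

def smc_step(e, t_i, w_ij, w_hi, t_h, d, alpha=0.1, dt=1e-2):
    """One Euler step of the SMC rules of Eqs. (18) and (19)
    for a single hidden neuron i connected to a single output neuron.

    e    : learning error tau_j - tau_j^d
    t_i  : firing time of hidden neuron i
    w_ij : scalar hidden-to-output weight
    w_hi : (N, K) array of input-to-hidden weights w_hi^k
    t_h  : (N,) firing times of the input neurons
    d    : (K,) synaptic delays
    """
    s = t_i - t_h[:, None] - d[None, :]       # arguments of y_h^k(t_i)
    y = eps(s)                                # y_h^k(t_i)
    dsum = np.sum(w_hi * d_eps(s))            # sum_h sum_k w_hi^k dy_h^k(t_i)/dt_i
    w_ij_dot = -alpha * t_i * np.sign(e)                 # Eq. (18)
    w_hi_dot = alpha * y * w_ij * dsum * np.sign(e)      # Eq. (19)
    return w_ij + dt * w_ij_dot, w_hi + dt * w_hi_dot
```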


The design steps of the two layered SNN are given in Algorithm 1.

Algorithm 1. The algorithm for the two layered SNN.

Set up the network;
Initialize the weights w_{hi}^k and w_{ij} randomly within the interval [0, 1];
for each input neuron h do
    Convert the sensor signal to the spike time t_h using Eq. (23);
end for
Set the simulation time t = 0;
for each neuron i in the hidden layer do
    while t < max_steps do
        Calculate x_i(t) using Eqs. (1) and (2);
        if x_i(t) < θ then
            Set t = t + time_step;
        else
            Set the time instance t at which the membrane potential x_i(t) crosses the threshold value as the firing time of neuron i: t_i = t;
        end if
        if all neurons in the hidden layer have fired then
            Set t = max_steps;
        end if
    end while
end for
for each neuron j in the output layer do
    Compute the output τ_j = Σ_{i∈Γ_j} w_{ij} t_i;
end for
Set the error e = Σ_{j∈J} (τ_j − τ_j^d);
for each neuron h in the input layer do
    for each neuron i in the hidden layer do
        for each synapse k do
            Calculate ẇ_{hi}^k using Eq. (19);
            Set the new weight w_{hi}^k = w_{hi}^k + ẇ_{hi}^k;
        end for
    end for
end for
for each neuron i in the hidden layer do
    for each neuron j in the output layer do
        Calculate ẇ_{ij} using Eq. (18);
        Set the new weight w_{ij} = w_{ij} + ẇ_{ij};
    end for
end for
Save the parameters of the SNN.


4. Encoding continuous inputs into spike times

As mentioned earlier, the inputs and the outputs of SNNs are described by spike times. To be able to use SNNs in control engineering applications, methods are needed for the transformation between analog signals and spike times. The three main approaches to encoding information into spikes are rate coding, population coding and delay coding [51]. In rate coding, the analog sensor value is mapped to the firing rate of the neuron, whereas the precise timing of the spikes is not distinctive (e.g. two neurons with different firing frequencies can both fire a spike at the time instant t = t_a). In population coding, the sensor signal is distributed over a population of neurons instead of a single neuron. The entire input space is covered with overlapping activation functions, which are usually Gaussian receptive fields. For each input value, a number of neurons produce a nonzero output value; neurons with the highest values fire earlier, whereas smaller values result in later firing times.

In this study the delay coding scheme is utilized, in which weaker signals are associated with 'late' firing times and stronger signals with 'early' firing times. This method has been used in many studies owing to its simplicity and reduced computational burden [52–54]. It arises from the fact that a spiking neuron receiving a weaker input signal tends to fire a spike later than the same neuron receiving a stronger one. To convert input values into the spike times fed to the SNN, the following formula is used:

t_j(x) = t_{max} - \mathrm{round}\left( t_{min} + \frac{(x - x_{min})(t_{max} - t_{min})}{x_{max} - x_{min}} \right)    (23)

where t_j(x), t_min and t_max are the current, the minimum and the maximum spike times, and x, x_min and x_max are the current, the minimum and the maximum values of the input variable, respectively. The minimum spike time describes the minimum time required for the neuron to be activated, whereas the maximum spike time indicates the time after which the neuron can no longer fire a spike. With the use of the modified network structure, in which the output layer has a linear activation function, the need for decoding to convert spike times back into real numbers has been eliminated, resulting in a reduction in the computational burden.
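A minimal Python sketch of the delay coding of Eq. (23) is given below; the numeric ranges and the use of integer-millisecond spike times are illustrative assumptions of ours.

```python
def encode_spike_time(x, x_min, x_max, t_min, t_max):
    """Delay coding of Eq. (23): stronger inputs map to earlier spike times."""
    scaled = t_min + (x - x_min) * (t_max - t_min) / (x_max - x_min)
    return t_max - round(scaled)

# Illustrative check with spike times in integer milliseconds:
print(encode_spike_time(3.0, -3.0, 3.0, 0, 10))   # strongest input -> fires at t = 0
print(encode_spike_time(-3.0, -3.0, 3.0, 0, 10))  # weakest input  -> fires at t = 10
```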

5. Experimental results

To assess the performance of the SNN with the proposed SMC-based learning algorithm, experimental studies are carried out on a real-time laboratory servo system. The mechanical setup, produced by the Amira Company, Duisburg, Germany, consists of two DC motors with one tacho generator and one incremental encoder (see Fig. 2). The motors are placed facing each other and their drive shafts are rigidly coupled by a mechanical clutch. The first motor contains a tacho generator and is used for the control of the rotation speed or the shaft angle. The second motor operates as a generator to apply a load torque to the first motor. The essential feature of this setup is that it enables the generation of nonlinear load conditions, providing the means for comparing the performance of various controllers in the presence of nonlinear load disturbances.

Fig. 2. A servo system setup.

5.1. System identification experiments

The structure of the identifier is presented in Fig. 3. The inputs of the identifier consist of the present values of the external input signals as well as their d_i step delayed values, i.e. u[k], u[k−1], u[k−2], …, u[k−d_i], and the d_o step delayed outputs of the plant, i.e. y[k−1], y[k−2], …, y[k−d_o]. The main objective of the identifier is to determine the proper weight values that result in the generation of an output with the least possible deviation from the output of the system for all input values u(k).

The identifier used in this study has two inputs: the one-step delayed value of the input signal and the one-step delayed system output. The initial weights of the synapses are set randomly within the interval [0, 1]. Using the developed sliding mode based learning algorithm, these weights are adapted for the chirp excitation signal shown in Fig. 4. The amplitude of the chirp signal is 3 V; its initial frequency is 0.1 Hz, and the frequency reaches 1 Hz at the end of the 20th second. The use of a chirp signal as the excitation signal satisfies the notion of richness in excitation for better identification results. The applied signal takes both negative and positive values, which requires the motor to rotate in both directions.

Fig. 3. Identification scheme.

Fig. 4. Chirp excitation signal.
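The chirp excitation described above can be generated as follows; this is our sketch, and the linear frequency sweep is an assumption consistent with the stated start and end frequencies.

```python
import numpy as np

def chirp(t, a=3.0, f0=0.1, f1=1.0, T=20.0):
    """Linear chirp: amplitude a, instantaneous frequency f0 + (f1 - f0) * t / T."""
    phase = 2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2 / T)
    return a * np.sin(phase)

t = np.arange(0.0, 20.0, 0.01)  # 10 ms sampling time -> 2000 samples
u = chirp(t)                    # excitation signal of Fig. 4
```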


The SNN used as the identifier has a single hidden layer with seven spiking neurons. It is assumed that there are four synapses for each connection between the neurons. The decay time constant and the threshold value are selected as 8 ms and 2, respectively. The sampling time is 10 ms. The results presented in Fig. 5 indicate that the identifier can approach the plant output very quickly, indicating correct modeling. As a performance criterion, the root-mean-square error (RMSE) defined in Eq. (24) is used:

\mathrm{RMSE} = \sqrt{ \frac{1}{K} \sum_{i=1}^{K} \left( x_i^d - x_i \right)^2 }    (24)

where K represents the total number of samples, which is in this case 2000. The corresponding RMSE value for the training part is 0.35.

Fig. 5. SNN identifier output vs. plant output (training part).

After the learning of the parameters, in the testing part, the signal u(t) = 2 + sin(2πt) is applied as an input to the system. The experiment is run for 10 s, resulting in 1000 data points. The corresponding RMSE value is 0.22 (Fig. 6).

Fig. 6. SNN identifier output vs. plant output (testing part).
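For reference, the RMSE of Eq. (24) corresponds to the following minimal implementation (our sketch, not code from the paper):

```python
import numpy as np

def rmse(x_desired, x_actual):
    """Root-mean-square error of Eq. (24)."""
    err = np.asarray(x_desired) - np.asarray(x_actual)
    return np.sqrt(np.mean(err ** 2))
```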


5.2. Control performance studies

In order to make an in-depth comparison of the effectiveness of the gradient-based and VSS-based learning algorithms for SNNs, the parameter update rules for the former are also derived. The resulting update rules are as follows:

\Delta w_{ij} = -\eta \, (\tau_j - \tau_j^d) \, t_i    (25)

\Delta w_{hi}^k = \frac{\eta \, (\tau_j - \tau_j^d) \left( \sum_{i \in \Gamma_j} w_{ij} \right) y_h^k(t_i)}{\sum_{h \in \Gamma_i} \sum_k w_{hi}^k \, \partial y_h^k(t_i)/\partial t_i}    (26)

where η is the learning constant.
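For comparison with the SMC sketch given after Eq. (22), the gradient updates of Eqs. (25) and (26) can be written, under the same single-hidden-neuron assumptions of ours, as:

```python
import numpy as np

TAU = 6e-3  # decay time constant used in the control experiments

def d_eps(t):
    """Analytic time derivative of the spike response function of Eq. (2)."""
    return np.where(t > 0.0, np.exp(1.0 - t / TAU) * (1.0 - t / TAU) / TAU, 0.0)

def gradient_step(e, t_i, w_ij, w_hi, t_h, d, eta=5e-4):
    """One gradient-descent step per Eqs. (25) and (26) for a single hidden
    neuron i; with several hidden neurons, w_ij in Eq. (26) is replaced by
    the sum of the hidden-to-output weights over i in Gamma_j."""
    s = t_i - t_h[:, None] - d[None, :]
    y = np.where(s > 0.0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)  # y_h^k(t_i)
    dsum = np.sum(w_hi * d_eps(s))                                 # denominator of Eq. (26)
    dw_ij = -eta * e * t_i                                         # Eq. (25)
    dw_hi = eta * e * w_ij * y / dsum                              # Eq. (26)
    return w_ij + dw_ij, w_hi + dw_hi
```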

The two layered SNN is composed of two input neurons, seven hidden layer neurons and one output neuron. The inputs are the error and the change of error. The output neuron has a linear activation function. Four synapses are used for each connection between the input layer and the hidden layer, resulting in 63 parameters to be updated. The connections have a delay interval of 3 ms; hence the available synaptic delays range from 1 to 4 ms. The sampling time is set to 10 ms for all the experiments. The decay time constant is selected as τ = 6 ms and the threshold value is set to θ = 3. The initial value of the learning rate is selected as 0.0005 by trial and error, considering the general facts that a small learning rate results in slow convergence whereas a large learning rate may cause oscillations.

Fig. 7 presents the speed responses of the motor with the SNN controller for different set-point signals. The set-point signal is changed suddenly from 7 to 8 at the 4th second and then to 9 at the 7th second. It can be inferred that the SNN controller is capable of adapting its weights to regulate the system to changing set points.

Fig. 7. Speed response characteristic of SNN controller.

In order to determine the efficiency and the accuracy of the proposed controllers, three different types of load conditions are considered. The speed responses of the motor with the SNN controller under the step-changing load condition are given in Fig. 8. The corresponding load torque starts with a value of 1, increases suddenly to 2 at the 4th second, and then returns to 1 at the 7th second.


Fig. 8. Speed response of the motor for step changing load.

Fig. 9. Speed response of the motor for sinusoidal load condition.

Fig. 9 shows the speed response of the motor under the sinusoidal load condition, i.e. M_L(t) = 2 + 0.4 sin(1.5t). Fig. 10 shows the speed response of the motor under a load condition proportional to the square of the speed, i.e. M_L(t) = 0.5(TachoSpeed)^2. This type of load corresponds to the load-torque characteristics of centrifugal fans, pumps and blowers.

In Table 1, load type 1, load type 2, and load type 3 correspond to the step-changing, sinusoidal, and speed-squared load torques, respectively. Experimental results obtained using multilayer feedforward neural networks with sigmoidal and hyperbolic tangent activation functions are also included. As in the experimental studies carried out using spiking neural networks, these networks have seven neurons in their hidden layer, and the neurons in the output layer employ a linear activation function. Regarding the results presented in Table 1, it can be inferred that spiking neural networks are capable of providing similar or smaller RMSE values compared to the second generation neural networks.

Fig. 10. Speed response of the motor for nonlinear load condition.

Table 1
RMSE values for different load conditions.

RMSE                                               Changing reference   Load 1   Load 2   Load 3
MLP with sigmoid activation function               1.0166               0.9420   0.6985   0.6170
MLP with hyperbolic tangent activation function    0.9274               0.8928   0.6315   0.5884
SNN with gradient descent based learning           0.8953               0.9264   0.6293   0.5802
SNN with sliding mode based learning               0.6093               0.6149   0.5959   0.5784

These results are consistent with the statement [4] that a network consisting of spiking neurons has at least the same computational power as the neural networks of the first two generations. The experimental results also indicate that the SNN controller with the sliding mode-based learning algorithm can outperform the SNN and MLP controllers with gradient descent-based learning algorithms for all types of load conditions. The proposed controller structure exhibits a shorter settling time and less deviation from the reference value. Another advantage is that it exhibits reduced oscillations in the presence of load disturbances.

6. Conclusion

The studies that demonstrate the robustness of variable structure control have motivated the use of the sliding mode control approach in the training of ANNs. In this study, the development of an SNN with a variable structure systems-based learning algorithm is considered for the identification and control of dynamic plants. The parameter update rules are derived based on the Lyapunov stability method to achieve stable online tuning of the neurocontroller parameters. The structure of the network is modified by replacing the spiking neurons in the output layer with neurons having a linear activation function; thus, the need for decoding is eliminated, resulting in a reduction in the computational time. The proposed algorithm is tested on a laboratory servo system in real time for different load conditions. The results indicate that SNNs with the VSS-based learning algorithm can yield better transient and steady-state


characteristics. Considering that in any practical application the training data obtained are always subject to noise (and possibly outliers), the proposed algorithm utilizing the variable-structure-systems based approach for the online learning of SNNs can be effectively applied to a variety of engineering problems. Possible applications include, but are not limited to, robot control, industrial process control, monitoring and detection of various medical phenomena, and time series prediction.

Acknowledgement

The authors would like to acknowledge the financial support of the Bogazici University Research Fund (Project No. 6527).

References

[1] W. McCulloch, W. Pitts, A logical calculus of ideas immanent in nervous activity, Bull. Math. Biophys. 5 (1943) 115–133.
[2] P.J. Werbos, Beyond regression: new tools for prediction and analysis in the behavioral sciences (Ph.D. thesis), Harvard University, 1974.
[3] S. Ghosh-Dastidar, H. Adeli, Spiking neural networks, Int. J. Neural Syst. 19 (2009) 295–308.
[4] W. Maass, Fast sigmoidal networks via spiking neurons, Neural Comput. 9 (1997) 279–304.
[5] R. VanRullen, R. Guyonneau, S.J. Thorpe, Spike times make sense, Trends Neurosci. 28 (2005) 1–4.
[6] S. Bohte, J. Kok, J.L. Poutre, Error-backpropagation in temporally encoded networks of spiking neurons, Neurocomputing 48 (2002) 17–37.
[7] S. Ghosh-Dastidar, H. Adeli, Improved spiking neural networks for EEG classification and epilepsy and seizure detection, Integr. Comput.-Aided Eng. 14 (2007) 187–212.
[8] A. Belatreche, L. Maguire, M. McGinnity, Advances in design and application of spiking neural networks, Soft Comput. 11 (2006) 239–248.
[9] B. Chandra, K.V.N. Babu, Classification of gene expression data using spiking wavelet radial basis neural network, Expert Syst. Appl. 41 (2014) 1326–1330.
[10] N. Kasabov, Evolving Spiking Neural Networks and Neurogenetic Systems for Spatio- and Spectro-temporal Data Modelling and Pattern Recognition, Springer, Berlin Heidelberg, Germany, 234–260.
[11] J. Shin, D. Smith, W. Swiercz, K. Staley, J. Rickard, J. Montero, L. Kurgan, K. Cios, Recognition of partially occluded and rotated images with a network of spiking neurons, IEEE Trans. Neural Netw. 21 (2010) 1697–1709.
[12] R. Batllori, C. Laramee, W. Land, J. Schaffer, Evolving spiking neural networks for robot control, Procedia Comput. Sci. 6 (2011) 329–334.
[13] A. Bouganis, M. Shanahan, Training a spiking neural network to control a 4-dof robotic arm based on spike timing-dependent plasticity, in: Proceedings of the 2010 International Joint Conference on Neural Networks, Barcelona, Spain, 2010, pp. 1–8.
[14] R. Abiyev, O. Kaynak, Y. Oniz, Supervised learning with spiking neural networks, in: Proceedings of the 2012 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Taiwan, 2012, pp. 1030–1035.
[15] D. Gamez, A.K. Fidjeland, E. Lazdins, iSpike: a spiking neural interface for the iCub robot, Bioinspir. Biomim. 7 (2012) 025008.
[16] F. Ponulak, Comparison of supervised learning methods for spike time coding in spiking neural networks, Int. J. Appl. Math. Comput. Sci. 16 (2006) 101–113.
[17] S. Bohte, Spiking neural networks (Ph.D. thesis), ISBN 90-6734-167-3, 148 pp., 2003.
[18] D. Rumelhart, G. Hinton, R. Williams, Learning representations by back-propagating errors, Nature 323 (1986) 533–536.
[19] J. Xin, M. Embrechts, Supervised learning with spiking neural networks, in: Proceedings of the International Joint Conference on Neural Networks, Washington, USA, 2001, pp. 1772–1777.
[20] S. McKennoch, D. Liu, L. Bushnell, Fast modifications of the SpikeProp algorithm, in: Proceedings of the International Joint Conference on Neural Networks, Vancouver, Canada, 2006, pp. 3970–3977.


[21] I. Sporea, A. Gruning, Supervised learning in multilayer spiking neural networks, Neural Comput. 25 (2013) 473–509.
[22] K.J. Astrom, B. Wittenmark, Adaptive Control, Addison-Wesley, Boston, USA, 1995.
[23] N. Sadati, R. Ghadami, Adaptive multi-model sliding mode control of robotic manipulators using soft computing, Neurocomputing 71 (2008) 2702–2710.
[24] A. Topalov, O. Kaynak, Neural network modeling and control of cement mills using a variable structure systems theory based on-line learning mechanism, J. Process Control 14 (2004) 581–589.
[25] E. Kayacan, O. Cigdem, O. Kaynak, Sliding mode control approach for online learning as applied to type-2 fuzzy neural networks and its experimental evaluation, IEEE Trans. Ind. Electron. 59 (2012) 3510–3520.
[26] S. Ahmed, N. Shakev, A. Topalov, K. Shiev, O. Kaynak, Sliding mode incremental learning algorithm for interval type-2 Takagi–Sugeno–Kang fuzzy neural networks, Evol. Syst. 3 (2012) 179–188.
[27] R. Lian, Design of an enhanced adaptive self-organizing fuzzy sliding-mode controller for robotic systems, Expert Syst. Appl. 39 (2012) 1545–1554.
[28] D. Lin, X. Wang, Observer-based decentralized fuzzy neural sliding mode control for interconnected unknown chaotic systems via network structure adaptation, Fuzzy Sets Syst. 161 (2010) 2066–2080.
[29] W. Gerstner, W.M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, Cambridge, UK, 2002.
[30] M. Wehr, G. Laurent, Odour encoding by temporal sequences of firing in oscillating neural assemblies, Nature 384 (1996) 162–166.
[31] R. Johansson, I. Birznieks, First spikes in ensembles of human tactile afferents code complex spatial fingertip events, Nat. Neurosci. 7 (2004) 170–177.
[32] C.-N. Ko, WSVR-based fuzzy neural network with annealing robust algorithm for system identification, J. Frankl. Inst. 349 (2012) 1758–1780.
[33] P.-H. Chou, C.-S. Chen, F.-J. Lin, DSP-based synchronous control of dual linear motors via Sugeno type fuzzy neural network compensator, J. Frankl. Inst. 349 (2012) 792–812.
[34] R. Abiyev, O. Kaynak, E. Kayacan, A type-2 fuzzy wavelet neural network for system identification and control, J. Frankl. Inst. 350 (2013) 1658–1685.
[35] K. McFall, Automated design parameter selection for neural networks solving coupled partial differential equations with discontinuities, J. Frankl. Inst. 350 (2013) 300–317.
[36] A. Topalov, O. Kaynak, Online learning in adaptive neurocontrol schemes with a sliding mode algorithm, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 31 (2001) 445–450.
[37] A. Topalov, K.-C. Kim, J.-H. Kim, B.-K. Lee, Fast genetic on-line learning algorithm for neural network and its application to temperature control, in: Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 1996, pp. 649–654.
[38] H. Niska, T. Hiltunen, A. Karppinen, J. Ruuskanen, M. Kolehmainen, Evolving the neural network model for forecasting air pollution time series, Eng. Appl. Artif. Intell. 17 (2004) 159–167.
[39] H. Chen, Z. Zeng, Deformation prediction of landslide based on improved back-propagation neural network, Cogn. Comput. (2013) 1–7.
[40] A.R. Khorsand, T. Akbarzadeh, R. Mohammad, Multi-objective meta level soft computing-based evolutionary structural design, J. Frankl. Inst. 344 (2007) 595–612.
[41] G. Howard, E. Gale, L. Bull, B. Costello, A. Adamatzky, Evolution of plastic learning in spiking networks via memristive connections, IEEE Trans. Evol. Comput. 16 (2012) 711–729.
[42] G. Parma, B. Menezes, A. Braga, Sliding mode algorithm for training multilayer artificial neural networks, Electron. Lett. 34 (1998) 97–98.
[43] Y. Shuanghe, Y. Xinghuo, M. Zhihong, A fuzzy neural network approximator with fast terminal sliding mode and its applications, Fuzzy Sets Syst. 148 (2004) 469–486.
[44] C.W. Chen, P.C. Chen, W.L. Chiang, Modified intelligent genetic algorithm-based adaptive neural network control for uncertain structural systems, J. Vib. Control 19 (2013) 1333–1347.
[45] X. Yu, O. Kaynak, Sliding-mode control with soft computing: a survey, IEEE Trans. Ind. Electron. 56 (2009) 3275–3285.
[46] O. Kaynak, K. Erbatur, M. Ertugrul, The fusion of computationally intelligent methodologies and sliding-mode control—a survey, IEEE Trans. Ind. Electron. 48 (2001) 4–17.
[47] Y. Oniz, E. Kayacan, O. Kaynak, A dynamic method to forecast the wheel slip for antilock braking system and its experimental evaluation, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 39 (2009) 551–560.
[48] J.J.E. Slotine, W. Li, Applied Nonlinear Control, Prentice Hall, New Jersey, 1991.
[49] S.H. Zak, Systems and Control, Oxford University Press, New York, 2003.

Y. Oniz, O. Kaynak / Journal of the Franklin Institute 351 (2014) 3269–3285

3285

[50] V.I. Utkin, J. Guldner, J. Shi, Sliding Mode Control in Electromechanical Systems, CRC Press, Boca Raton, USA, 1999.
[51] B. Ruf, Computing and learning with spiking neurons—theory and simulations (Ph.D. thesis), TU Graz, 1998.
[52] Q. Yu, H. Tang, K.C. Tan, H. Li, Rapid feedforward computation by temporal encoding and learning with spiking neurons, IEEE Trans. Neural Netw. Learn. Syst. 24 (2013) 1539–1552.
[53] H. Hagras, A. Pounds-Cornish, M. Colley, V. Callaghan, G. Clarke, Evolving spiking neural network controllers for autonomous robots, in: Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA'04), vol. 5, 2004, pp. 4620–4626.
[54] B. Fontaine, D.F. Goodman, V. Benichoux, R. Brette, Brian hears: online auditory processing using vectorization over channels, Front. Neuroinform. 5 (2011).