Int. J. Electron. Commun. (AEÜ) 68 (2014) 369–378
Craziness based particle swarm optimization algorithm for IIR system identification problem

P. Upadhyay a, R. Kar a,∗, D. Mandal a, S.P. Ghoshal b

a Department of ECE, NIT Durgapur, India
b Department of EE, NIT Durgapur, West Bengal, India
Article info

Article history: Received 31 May 2013; Accepted 18 October 2013

Keywords: IIR adaptive filter; CRPSO; Evolutionary optimization techniques; Mean square error
Abstract

In this paper a variant of particle swarm optimization (PSO), called craziness based particle swarm optimization (CRPSO), is applied to the infinite impulse response (IIR) system identification problem. CRPSO, a modified version of PSO, adopts a number of random variables for better and faster exploration and exploitation of the multidimensional search space. Incorporating a craziness factor in the basic velocity expression of PSO not only brings diversity to the particles but also ensures convergence to the optimal solution. The proposed CRPSO based system identification approach alleviates the inherent drawbacks of premature convergence and stagnation, unlike the real coded genetic algorithm (RGA), particle swarm optimization (PSO) and differential evolution (DE). The simulation results obtained for some well known benchmark examples justify the efficacy of the proposed CRPSO based system identification approach over RGA, PSO and DE in terms of convergence speed, estimated plant coefficients and mean square error (MSE) values produced for both same order and reduced order models of adaptive IIR filters.

© 2013 Elsevier GmbH. All rights reserved.
1. Introduction

An adaptive filter behaves like an ordinary filter except that its coefficient values change from iteration to iteration, because an adaptive algorithm is incorporated to cope with ever changing environmental conditions and/or unknown system parameters. The adaptive algorithm varies the filter characteristic by manipulating the filter coefficient values according to a performance criterion. In most cases the error between the output signal of the unknown system and the output signal of the adaptive identifying filter is taken as the performance criterion, and the adaptive algorithm works towards minimization of this error signal through proper adjustment of the filter coefficients. Finite impulse response (FIR) and infinite impulse response (IIR) filters are the two types of digital filter [1] that can be used in adaptive filter applications; considering filter order and hardware requirements, the IIR filter is usually the preferred implementation. The least mean square (LMS) technique and its variants are extensively used as efficient approaches to adaptive filtering [2,3]. Classical optimization techniques cannot handle discontinuous, non-differentiable and multimodal cost functions. Different heuristic and meta-heuristic search algorithms have
∗ Corresponding author. Tel.: +91 9434788056; fax: +91 343 2547375.
E-mail addresses: [email protected] (R. Kar), [email protected] (D. Mandal), [email protected] (S.P. Ghoshal).
1434-8411/$ – see front matter © 2013 Elsevier GmbH. All rights reserved. http://dx.doi.org/10.1016/j.aeue.2013.10.003
successfully been implemented in such cases. Some evolutionary optimization techniques that have been aptly applied are: the genetic algorithm (GA) [4]; the seeker optimization algorithm (SOA) [5]; cat swarm optimization (CSO) [6]; the bee colony algorithm (BCA) [7,8]; the gravitational search algorithm (GSA) [9]; the bacterial foraging algorithm [10]; conventional PSO [11–19]; quantum behaved PSO (QPSO) [20,21]; PSO with quantum infusion (PSO-QI) [22]; and adaptive inertia weight PSO (AIW-PSO) [23]. To increase randomness through mutation, a random vector is introduced into basic QPSO to enhance its global search ability [24]. A biological evolutionary strategy is adopted in the development of the differential evolution (DE) algorithm [25]. Combining DE with a wavelet mutation (WM) strategy yields the DEWM algorithm [26]. The differential cultural (DC) algorithm [27] and the adaptive simulated annealing (ASA) algorithm [28] have also been used for optimal filter design problems. Different adaptive filtering algorithms are considered in [29], with proper parameter values, to overcome the problems of convergence to biased or local minimum solutions and of slow convergence speed. In this paper, the capabilities of RGA, PSO, DE and CRPSO in finding the optimal result in a multidimensional search space are investigated thoroughly for the identification of an unknown IIR system by an optimally designed adaptive IIR filter of the same order, and of reduced order as well. RGA is a probabilistic heuristic search optimization technique developed by Holland [30]. PSO is a swarm intelligence based algorithm developed by Eberhart et al. [31,32]. The DE algorithm was first introduced by Storn and Price [33].
It has been realized that GA is incapable of local searching [25] in a multidimensional search space. GA, PSO and DE suffer from premature convergence and are easily trapped in suboptimal solutions [8,34,35]. To enhance global and local search performance, the authors suggest CRPSO as an alternative technique. In this paper the performances of all the optimization algorithms are analyzed with eight benchmark IIR plants and adaptive filters. Simulation results obtained with the proposed CRPSO based technique are compared with those of the real coded genetic algorithm (RGA), PSO and DE to demonstrate the effectiveness and superior performance of CRPSO in achieving the global optimal solution in terms of filter coefficients and the mean square error (MSE) of the adaptive system identification problem. The rest of the paper is organized as follows: in Section 2, the mathematical expression of an adaptive IIR filter and the objective function are formulated. In Section 3, the PSO and CRPSO algorithms are briefly discussed for the adaptive IIR filter design problem. In Section 4, comprehensive and demonstrative sets of results and illustrations are given to provide a comparative study among the different algorithms. Finally, Section 5 concludes the paper.
2. Design formulation

The main task of system identification is to vary the parameters of the adaptive IIR filter using evolutionary algorithms until the filter's output signal matches the output signal of the unknown system when the same input signal is applied simultaneously to both the adaptive filter and the unknown plant under consideration. In other words, in system identification the optimization algorithm searches iteratively for the adaptive IIR filter coefficients such that the filter's input/output relationship closely matches that of the unknown system. The basic block diagram for system identification using an adaptive IIR filter is shown in Fig. 1: the common input x(p) drives both the unknown IIR plant Hs(z) and the adaptive IIR filter Haf(z); the plant output d(p) (corrupted by noise) is compared with the filter output y(p), and the resulting error e(p) drives the evolutionary optimization algorithm.

This section discusses the design strategy of the IIR filter. The input–output relation is governed by the following difference equation [1]:

Σ_{k=0}^{n} a_k y(p − k) = Σ_{k=0}^{m} b_k x(p − k)   (1)

where x(p) and y(p) are the filter's input and output, respectively, and n (≥ m) is the filter's order. With the assumption a_0 = 1, the transfer function of the adaptive IIR filter is expressed as given in (2):

H(z) = (Σ_{k=0}^{m} b_k z^−k) / (1 + Σ_{k=1}^{n} a_k z^−k)   (2)

In this design approach the unknown plant with transfer function Hs(z) is to be identified with the adaptive IIR filter Haf(z) in such a way that the outputs from both systems match closely for the given input. In the system identification problem the mean square error (MSE) over the time samples is considered as the objective function, also known as the error fitness function, expressed in (3):

MSE = J = (1/N) Σ_{p=1}^{N} e²(p)   (3)

In dB the mean square error is expressed as

MSE (dB) = 10 log10(J)   (4)

where the error signal is e(p) = d(p) − y(p); d(p) is the response of the unknown plant; y(p) is the response of the adaptive IIR filter; and N is the number of samples. The main objective of any evolutionary algorithm considered in this work is to minimize the error fitness J by proper adjustment of the coefficient vector ω of the adaptive filter's transfer function, so that the output responses of the filter and the plant match closely and the error is minimized. Here

ω = [a_0 a_1 . . . a_n b_0 b_1 . . . b_m]^T   (5)
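The filtering and fitness computations of (1)–(5) can be sketched as follows. This is an illustrative Python sketch, not the authors' MATLAB code; the function names are our own.

```python
import numpy as np

def iir_output(b, a, x):
    """Evaluate the difference equation (1) with a[0] = 1:
    y(p) = sum_k b[k] x(p-k) - sum_{k>=1} a[k] y(p-k)."""
    y = np.zeros(len(x))
    for p in range(len(x)):
        y[p] = sum(b[k] * x[p - k] for k in range(len(b)) if p - k >= 0)
        y[p] -= sum(a[k] * y[p - k] for k in range(1, len(a)) if p - k >= 0)
    return y

def error_fitness(plant, candidate, x):
    """Error fitness J of (3): mean squared difference between the plant
    response d(p) and the adaptive-filter response y(p)."""
    b_p, a_p = plant
    b_c, a_c = candidate
    e = iir_output(b_p, a_p, x) - iir_output(b_c, a_c, x)
    return np.mean(e ** 2)  # J; MSE (dB) = 10*log10(J), per (4)
```

When the candidate coefficients equal the plant coefficients, J is exactly zero; any mismatch raises J above zero, which is what the evolutionary search exploits.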
Fig. 1. Adaptive IIR filter for system identification.

3. Evolutionary algorithms employed

Evolutionary algorithms are based on meta-heuristics, which are characterized as stochastic, adaptive and learning procedures that produce intelligent optimization schemes. Such schemes have the potential to adapt to their ever changing dynamic environment through previously acquired knowledge. In this section PSO and CRPSO are discussed for the identification of some benchmark IIR systems. Discussions regarding RGA and DE can be found in [17,26], respectively.

3.1. Particle swarm optimization (PSO)

PSO is a flexible, robust, population based stochastic search algorithm with the attractive features of simplicity of implementation and the ability to converge quickly to a reasonably good solution. Additionally, it can handle large search spaces and non-differentiable objective functions, unlike traditional optimization methods. Eberhart et al. [31,32] developed the PSO algorithm to simulate the random movements of bird flocking and fish schooling. The velocity and position of each particle are modified according to (6) and (7), respectively [31]:

V_i^(k+1) = w ∗ V_i^(k) + C1 ∗ rand1 ∗ {pbest_i^(k) − S_i^(k)} + C2 ∗ rand2 ∗ {gbest_i^(k) − S_i^(k)}   (6)

where the velocity is clamped as V_i = V_max for V_i > V_max and V_i = V_min for V_i < V_min;

S_i^(k+1) = S_i^(k) + V_i^(k+1)   (7)
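As an illustration (not the authors' implementation), one iteration of (6) and (7) with velocity clamping can be sketched in Python. The acceleration constants echo Table 1 (C1 = C2 = 2.05), while the inertia weight and clamp range here are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(S, V, pbest, gbest, w=0.7, C1=2.05, C2=2.05, Vmax=1.0):
    """One PSO iteration: velocity update (6) with clamping, position update (7).
    S, V, pbest: arrays of shape (particles, dimensions); gbest: (dimensions,)."""
    rand1 = rng.random(S.shape)
    rand2 = rng.random(S.shape)
    V = w * V + C1 * rand1 * (pbest - S) + C2 * rand2 * (gbest - S)  # (6)
    V = np.clip(V, -Vmax, Vmax)  # V_i = Vmax if V_i > Vmax, Vmin if V_i < Vmin
    return S + V, V              # (7)
```

In the filter design context each particle's position S_i holds one candidate coefficient vector ω of (5), and the fitness is the error fitness J of (3).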
The flowchart of the PSO algorithm [18] is presented in Fig. 2.

3.2. Craziness based particle swarm optimization (CRPSO)
In order to overcome the limitations of classical PSO mentioned in [34,35], and because in bird flocking or fish schooling a bird or a fish often changes direction suddenly, the authors have modified the conventional PSO by introducing an entirely new velocity expression (8), associated with several random numbers, and a "craziness velocity" having a predefined probability of
Fig. 2. Flowchart for PSO: create np particles; initialize the position and velocity of every particle and set the control parameters; evaluate the fitness of the particles and store the positions having the personal and group best fitness values as pbest and gbest; modify velocities according to (6) and positions according to (7); update pbest and gbest according to the error fitness value of the objective function; repeat until the maximum number of iterations is reached, then stop.
craziness. This modified PSO is termed craziness based particle swarm optimization (CRPSO) [17,18,36]. The velocity in this case can be expressed as follows:

V_i^(k+1) = r2 ∗ sign(r3) ∗ V_i^(k) + (1 − r2) ∗ C1 ∗ r1 ∗ {pbest_i^(k) − S_i^(k)} + (1 − r2) ∗ C2 ∗ (1 − r1) ∗ {gbest^(k) − S_i^(k)}   (8)

where r1, r2 and r3 are random parameters drawn uniformly from the interval [0,1] and sign(r3) is a function defined as

sign(r3) = −1 where r3 ≤ 0.05; sign(r3) = 1 where r3 > 0.05   (9)
The two random parameters rand1 and rand2 of (6) are independent. If both are large, both the personal and social experiences are overused and the particle is driven too far away from the local
optimum. If both are small, the personal and social experiences are not fully used and the convergence speed of the optimization technique is reduced. So, instead of taking independent rand1 and rand2, a single random number r1 is chosen, so that when r1 is large, (1 − r1) is small, and vice versa. Moreover, to control the balance between global and local search, another random parameter r2 is introduced. When birds flock for food there may be rare cases in which, after the position of a particle is changed according to (7), a bird may not, due to inertia, fly towards the region it considers most promising for food; instead, it may be heading in the opposite direction. In the following step the direction of the bird's velocity should then be reversed so that it flies back towards the promising region; sign(r3) is introduced for this purpose. In bird flocking or fish schooling, a bird or a fish often changes direction suddenly. This behaviour is described by a "craziness" factor and is modelled in the technique with a craziness variable. A craziness operator is introduced in CRPSO to ensure that a particle has a predefined craziness probability, which maintains the diversity of the particles. Consequently, before the position update, the velocity of the particle is crazed by

V_i^(k+1) = V_i^(k+1) + P(r4) ∗ sign(r4) ∗ v_i^craziness   (10)

where r4 is a random parameter chosen uniformly from the interval [0,1]; v_i^craziness is a random parameter chosen uniformly from the interval [v_i^min, v_i^max]; and P(r4) and sign(r4) are defined, respectively, as

P(r4) = 1 when r4 ≤ Pcr; P(r4) = 0 when r4 > Pcr   (11)

where Pcr is a predefined probability of craziness, and

sign(r4) = −1 when r4 ≥ 0.5; sign(r4) = 1 when r4 < 0.5   (12)
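A sketch of the CRPSO velocity rule (8)–(12) for a single particle follows; this is an illustrative Python rendering with our own helper name and array shapes, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def crpso_velocity(V, S, pbest, gbest, C1=2.05, C2=2.05,
                   Pcr=0.3, v_craziness=0.0001):
    """CRPSO velocity update per (8)-(12) for one particle."""
    r1, r2, r3, r4 = rng.random(4)
    sign_r3 = -1.0 if r3 <= 0.05 else 1.0           # (9): rare direction reversal
    V = (r2 * sign_r3 * V
         + (1 - r2) * C1 * r1 * (pbest - S)
         + (1 - r2) * C2 * (1 - r1) * (gbest - S))  # (8)
    P_r4 = 1.0 if r4 <= Pcr else 0.0                # (11): craziness gate
    sign_r4 = -1.0 if r4 >= 0.5 else 1.0            # (12): random craziness sign
    return V + P_r4 * sign_r4 * v_craziness         # (10)
```

Note how r1 couples the cognitive and social terms (large r1 means small 1 − r1), while r2 trades the inertia term against both of them, as discussed above.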
Reversal of the direction of the bird's velocity should occur only rarely; to achieve this, the threshold r3 ≤ 0.05 (a very low value) is chosen, for which sign(r3) = −1 reverses the direction. If Pcr is chosen less than or equal to 0.3, the random number r4 will most often exceed Pcr, so the craziness factor P(r4) will be zero in most cases. This is desirable, since otherwise heavy unnecessary oscillations would occur in the convergence curve near the end of the maximum iteration cycles, as follows from (11). v_i^craziness is chosen very small (= 0.0001), as shown in Table 1. The threshold r4 ≥ 0.5 (or < 0.5) gives equal probability to either direction of reversal of v_i^craziness, as follows from (10) and (12). The flowchart of CRPSO [17,18,36] is presented in Fig. 3.

4. Simulation results and discussions

Extensive MATLAB simulation studies have been performed to compare the performance of four algorithms, namely RGA, PSO, DE and CRPSO, on the unknown system identification problem. The values of the control parameters used for RGA, PSO, DE and CRPSO are given in Table 1. All optimization programmes are run in MATLAB 7.5 on a Core(TM) 2 Duo processor, 3.00 GHz, with 2 GB RAM. The simulation studies have been carried out on eight different benchmark examples, also considered in other reported literature; for the first four examples two cases are studied, one with the same filter order and one with a reduced filter order. In all cases the bs and as are the numerator and denominator coefficients, respectively, of the same order and reduced order models. In each case, fifty independent runs, each of 200/300 iteration cycles,
Fig. 3. Flowchart for CRPSO.
are performed for each of the four algorithms and for each case study, to analyze the consistency and usefulness of the results obtained. The five best results are reported in this work. The input common to both the unknown plant and the identifying IIR filter is a randomly generated Gaussian white noise signal with zero mean and unit variance.

4.1. Example I

In this example, a fifth order IIR plant taken from [6,14] is considered. The transfer function is shown in (13).
Hs(z) = (0.1084 + 0.5419z^−1 + 1.0837z^−2 + 1.0837z^−3 + 0.5419z^−4 + 0.1084z^−5) / (1 + 0.9853z^−1 + 0.9738z^−2 + 0.3864z^−3 + 0.1112z^−4 + 0.0113z^−5)   (13)
4.1.1. Case 1

This fifth order plant Hs(z) can be modelled using a fifth order IIR filter Haf(z). Hence the transfer function of the adaptive IIR filter model [6,14] is assumed as (14).

Haf(z) = (b0 + b1 z^−1 + b2 z^−2 + b3 z^−3 + b4 z^−4 + b5 z^−5) / (1 − a1 z^−1 − a2 z^−2 − a3 z^−3 − a4 z^−4 − a5 z^−5)   (14)
Table 1
Control parameters of RGA, PSO, DE and CRPSO.

Parameters                  RGA                      PSO                      DE        CRPSO
Population size             120                      120                      120       120
Iteration cycles            200/300                  200/300                  200/300   200/300
Crossover rate              1                        –                        –         –
Crossover                   Two point crossover      –                        –         –
Mutation rate, mutation     0.01, Gaussian mutation  –                        –         –
Selection, probability      Roulette, 1/3            –                        –         –
C1, C2, v_i^min, v_i^max    –                        2.05, 2.05, 0.01, 1.0    –         2.05, 2.05, 0.01, 1.0
w_max, w_min                –                        1.0, 0.4                 –         –
Pcr, v^craziness            –                        –                        –         0.3, 0.0001
Cr, F                       –                        –                        0.3, 0.5  –
4.1.2. Case 2

In this case a higher order plant is modelled by a reduced order filter: the fifth order plant in (13) is modelled by a fourth order IIR filter [6] given in (15).

Haf(z) = (b0 + b1 z^−1 + b2 z^−2 + b3 z^−3 + b4 z^−4) / (1 − a1 z^−1 − a2 z^−2 − a3 z^−3 − a4 z^−4)   (15)
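For concreteness, the Case 1 setup can be sketched by exciting the plant coefficients of (13) with Gaussian white noise and scoring a candidate coefficient set. This is illustrative only; the filter routine is our own, not the authors' code.

```python
import numpy as np

# Plant coefficients of Hs(z) in (13)
b_plant = [0.1084, 0.5419, 1.0837, 1.0837, 0.5419, 0.1084]
a_plant = [1.0, 0.9853, 0.9738, 0.3864, 0.1112, 0.0113]

def iir_output(b, a, x):
    """Evaluate the difference equation (1) with a[0] = 1."""
    y = np.zeros(len(x))
    for p in range(len(x)):
        y[p] = sum(b[k] * x[p - k] for k in range(len(b)) if p - k >= 0)
        y[p] -= sum(a[k] * y[p - k] for k in range(1, len(a)) if p - k >= 0)
    return y

rng = np.random.default_rng(42)
x = rng.standard_normal(500)         # zero mean, unit variance input
d = iir_output(b_plant, a_plant, x)  # plant response d(p)

# Perfect identification gives J = 0; any coefficient error raises J.
b_cand = [c + 0.01 for c in b_plant]  # slightly perturbed numerator
J = np.mean((d - iir_output(b_cand, a_plant, x)) ** 2)
```

An optimizer such as CRPSO would adjust the ten free coefficients of (14) to drive J towards zero.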
4.2. Example II

In this example, a fourth order IIR plant is considered from [6,15] and the transfer function is shown in (16).

Hs(z) = (1 − 0.9z^−1 + 0.81z^−2 − 0.729z^−3) / (1 + 0.04z^−1 + 0.2775z^−2 − 0.2101z^−3 + 0.14z^−4)   (16)

4.2.1. Case 1

This fourth order plant Hs(z) can be modelled using a fourth order IIR filter Haf(z). Hence the transfer function of the adaptive IIR filter model [6,15] is assumed as (17).

Haf(z) = (b0 + b1 z^−1 + b2 z^−2 + b3 z^−3) / (1 − a1 z^−1 − a2 z^−2 − a3 z^−3 − a4 z^−4)   (17)

4.2.2. Case 2

In this case the fourth order plant in (16) is modelled by a third order IIR filter [6] presented in (18).

Haf(z) = (b0 + b1 z^−1 + b2 z^−2) / (1 − a1 z^−1 − a2 z^−2 − a3 z^−3)   (18)

4.3. Example III

In this example, a second order IIR plant is considered from [5,6,8,9,12,15,16,20,24,25]. The transfer function is shown in (19).

Hs(z) = (0.05 − 0.4z^−1) / (1 − 1.131z^−1 + 0.25z^−2)   (19)

4.3.1. Case 1

This second order plant Hs(z) can be modelled using a second order IIR filter Haf(z). Hence the transfer function of the adaptive IIR filter model [5,6,8,9,12,15,16,20,24,25] is assumed as (20).

Haf(z) = (b1 + b2 z^−1) / (1 + a1 z^−1 + a2 z^−2)   (20)

4.3.2. Case 2

In this case a higher order plant is modelled by a reduced order filter: the second order plant in (19) is modelled by a first order IIR filter [5,6,8,9,12,16,20,24,25] given in (21).

Haf(z) = b / (1 + a z^−1)   (21)

4.4. Example IV

In this example, a third order IIR plant is considered from [6,22,24] and the transfer function is given in (22).

Hs(z) = (−0.2 − 0.4z^−1 + 0.5z^−2) / (1 − 0.6z^−1 + 0.25z^−2 − 0.2z^−3)   (22)

4.4.1. Case 1

This third order plant Hs(z) can be modelled using a third order IIR filter Haf(z). Hence the transfer function of the model [6,22,24] is assumed as (23).

Haf(z) = (b0 + b1 z^−1 + b2 z^−2) / (1 − a1 z^−1 − a2 z^−2 − a3 z^−3)   (23)

4.4.2. Case 2

In this case a higher order plant is modelled by a reduced order filter: the third order plant in (22) is modelled by a second order IIR filter [6,22] presented in (24).

Haf(z) = (b0 + b1 z^−1) / (1 − a1 z^−1 − a2 z^−2)   (24)

4.5. Example V

In this example, a sixth order IIR plant is considered from [8,22] and the transfer function is shown in (25).

Hs(z) = (1 − 0.4z^−2 − 0.65z^−4 + 0.26z^−6) / (1 − 0.77z^−2 − 0.8498z^−4 + 0.6486z^−6)   (25)

This sixth order unknown plant Hs(z) can be modelled using a sixth order IIR filter Haf(z). Hence the transfer function of the adaptive IIR filter model is assumed as (26).

Haf(z) = (b0 + b2 z^−2 + b4 z^−4 + b6 z^−6) / (1 − a2 z^−2 − a4 z^−4 − a6 z^−6)   (26)
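The plants of Examples II–V can be collected as plain coefficient lists (transcribed from (16), (19), (22) and (25)); the dictionary layout below is our own convenience, not anything from the paper. Note that Example V's model (26) tunes only the even-power coefficients, so the generic dimension formula is applied to Examples II–IV only.

```python
# (numerator b, denominator a) pairs for the plants Hs(z) of Examples II-V;
# a[0] = 1 comes first in every denominator.
benchmarks = {
    "II":  ([1.0, -0.9, 0.81, -0.729],
            [1.0, 0.04, 0.2775, -0.2101, 0.14]),
    "III": ([0.05, -0.4],
            [1.0, -1.131, 0.25]),
    "IV":  ([-0.2, -0.4, 0.5],
            [1.0, -0.6, 0.25, -0.2]),
    "V":   ([1.0, 0.0, -0.4, 0.0, -0.65, 0.0, 0.26],   # even powers of z^-1 only
            [1.0, 0.0, -0.77, 0.0, -0.8498, 0.0, 0.6486]),
}

# Same order models (17), (20), (23): the optimizer tunes all numerator
# coefficients plus a1..an, with a0 = 1 held fixed.
dims = {name: len(b) + len(a) - 1
        for name, (b, a) in benchmarks.items() if name != "V"}
```

This gives search-space dimensions of 8, 4 and 6 for the same order models of Examples II, III and IV, respectively.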
Fig. 4. Algorithms' best convergence profiles for Example I (Case 1).
Fig. 5. Algorithms' best convergence profiles for Example I (Case 2).

4.6. Example VI

In this example, a second order IIR plant is considered from [8,16] and the transfer function is shown in (27).

Hs(z) = 1 / (1 − 1.2z^−1 + 0.6z^−2)   (27)

This second order plant Hs(z) can be modelled using a second order IIR filter Haf(z). Hence the transfer function of the adaptive IIR filter model is assumed as (28).

Haf(z) = b0 / (1 + a1 z^−1 + a2 z^−2)   (28)

Fig. 6. Algorithms' best convergence profiles for Example II (Case 1).

4.7. Example VII

In this example, a third order IIR plant is considered from [23] and the transfer function is shown in (29).

Hs(z) = 1 / (1 − 0.5z^−1)^3   (29)

This third order IIR plant is modelled by a reduced order adaptive IIR filter of second order as in (30).

Haf(z) = 1 / (1 + a1 z^−1 + a2 z^−2)   (30)

4.8. Example VIII

In this example, a second order IIR plant is considered from [14,22,23] and the transfer function is shown in (31).

Hs(z) = (1.25z^−1 − 0.25z^−2) / (1 − 0.3z^−1 + 0.4z^−2)   (31)

This second order IIR plant is modelled by a same order adaptive IIR filter as in (32).

Haf(z) = (b1 z^−1 + b2 z^−2) / (1 + a1 z^−1 + a2 z^−2)   (32)

Fig. 7. Algorithms' best convergence profiles for Example II (Case 2).

The algorithm convergence characteristic shows the variation of the MSE value with the iteration cycle; the best performance of each algorithm for every example is shown in Figs. 4 (Example I, Case 1), 5 (Example I, Case 2), 6 (Example II, Case 1), 7 (Example II, Case 2), 8 (Example III, Case 1), 9 (Example III, Case 2), 10 (Example IV, Case 1), 11 (Example IV, Case 2), 12 (Example V, same order), 13 (Example VI, same order), 14 (Example VII, reduced order) and 15 (Example VIII, same order), respectively. The best optimization performance in terms of achieving the lowest MSE value is always observed for the CRPSO based technique in these figures. The comparatively large variance and standard deviation of CRPSO mean that CRPSO produces large oscillations or fluctuations in the exploration period, i.e. it searches for the near-global optimal solution over a large search space. Statistically analyzed results render a ground of judgement of the performance of the four
optimization techniques under consideration and are presented in Table 2 (Example I, Case 1), Table 3 (Example I, Case 2), Table 4 (Example II, Case 1), Table 5 (Example II, Case 2), Table 6 (Example III, Case 1), Table 7 (Example III, Case 2), Table 8 (Example IV, Case 1), Table 9 (Example IV, Case 2), Table 10 (Example V, same order), Table 11 (Example VI, same order), Table 12 (Example VII, reduced order) and Table 13 (Example VIII, same order). These tables show that CRPSO provides the lowest MSE values in dB in all cases (−53.0303, −52.1223, −67.3229, −26.7778, −59.6082, −21.8046, −56.7599, −35.3765, −85.4755, −198.2750, −42.2988 and −138.0980, respectively).

Fig. 8. Algorithms' best convergence profiles for Example III (Case 1).
Fig. 9. Algorithms' best convergence profiles for Example III (Case 2).
Fig. 10. Algorithms' best convergence profiles for Example IV (Case 1).
Fig. 11. Algorithms' best convergence profiles for Example IV (Case 2).
Fig. 12. Algorithms' best convergence profiles for Example V, same order.
Fig. 13. Algorithms' best convergence profiles for Example VI, same order.

Table 2
Statistical analysis of MSE (dB) values for Example I (Case 1).

MSE statistics       RGA       PSO       DE        CRPSO
Best                 −15.1286  −24.5593  −31.6229  −53.0303
Worst                −10.0393  −15.5752  −26.5758  −49.4800
Mean                 −13.0305  −19.8403  −28.9641  −51.8867
Variance             3.1409    9.8419    3.0622    1.6007
Standard deviation   1.7723    3.1372    1.7499    1.2652
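The dB figures in Table 2 and the linear MSE values quoted in the comparison section below are related through (4); a quick check (helper names are our own):

```python
import math

def db_to_linear(mse_db):
    """Invert (4): J = 10 ** (MSE_dB / 10)."""
    return 10 ** (mse_db / 10)

def linear_to_db(J):
    """Apply (4): MSE (dB) = 10 * log10(J)."""
    return 10 * math.log10(J)

# The best CRPSO value in Table 2, -53.0303 dB, corresponds to a linear
# MSE of about 4.977e-6 — the figure later quoted for Example I, Case 1.
J = db_to_linear(-53.0303)
```

The same conversion reproduces the other paired values in the text, e.g. −85.4755 dB ≈ 2.834e−9.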
Fig. 14. Algorithms' best convergence profiles for Example VII, reduced order.
Fig. 15. Algorithms' best convergence profiles for Example VIII, same order.

Table 3
Statistical analysis of MSE (dB) values for Example I (Case 2).

MSE statistics       RGA      PSO       DE        CRPSO
Best                 −9.6377  −18.9620  −25.6864  −52.1223
Worst                −6.2069  −11.6877  −24.0894  −51.3339
Mean                 −7.9929  −15.5403  −25.0301  −51.8283
Variance             1.8156   7.9764    0.4639    0.0718
Standard deviation   1.3474   2.8242    0.6811    0.2680

Table 4
Statistical analysis of MSE (dB) values for Example II (Case 1).

MSE statistics       RGA      PSO       DE        CRPSO
Best                 −5.8838  −16.5561  −25.8503  −67.3229
Worst                −2.7959  −11.6241  −21.4267  −47.6013
Mean                 −4.2623  −14.2717  −23.3114  −57.1263
Variance             1.0449   2.9646    2.4834    48.7024
Standard deviation   1.0222   1.7218    1.5759    6.9787

Table 5
Statistical analysis of MSE (dB) values for Example II (Case 2).

MSE statistics       RGA      PSO      DE        CRPSO
Best                 −3.5399  −7.3969  −12.7084  −26.7778
Worst                −1.5758  −6.1279  −11.2960  −23.9794
Mean                 −2.7988  −6.9259  −11.8712  −25.6263
Variance             0.4680   0.2018   0.2728    1.0899
Standard deviation   0.6841   0.4492   0.5223    1.0440

Table 6
Statistical analysis of MSE (dB) values for Example III (Case 1).

MSE statistics       RGA       PSO       DE        CRPSO
Best                 −18.5699  −28.8606  −40.8449  −59.6082
Worst                −11.349   −22.6761  −30.4444  −49.0556
Mean                 −13.8559  −25.5896  −36.0153  −51.6582
Variance             6.0642    6.3817    15.6950   15.98774
Standard deviation   2.4626    2.5262    3.9617    3.998468

Table 7
Statistical analysis of MSE (dB) values for Example III (Case 2).

MSE statistics       RGA      PSO      DE        CRPSO
Best                 −5.6288  −6.9443  −13.5754  −21.8046
Worst                −2.0343  −6.1654  −10.2000  −20.8619
Mean                 −4.0145  −6.3897  −11.7002  −21.1702
Variance             1.4592   0.0826   1.2763    0.1120
Standard deviation   1.2080   0.2874   1.1297    0.3346

Table 8
Statistical analysis of MSE (dB) values for Example IV (Case 1).

MSE statistics       RGA      PSO       DE        CRPSO
Best                 −7.3025  −13.8934  −21.9382  −56.7599
Worst                −3.6947  −12.6043  −20.7572  −50.3102
Mean                 −5.3754  −13.4570  −21.2907  −53.9672
Variance             1.5230   0.2155    0.1594    6.4283
Standard deviation   1.2341   0.4642    0.3993    2.5354

Table 9
Statistical analysis of MSE (dB) values for Example IV (Case 2).

MSE statistics       RGA      PSO       DE        CRPSO
Best                 −6.4340  −14.9214  −23.0103  −35.3765
Worst                −4.7677  −11.2668  −21.2494  −31.8818
Mean                 −5.4099  −13.1467  −22.2857  −34.0597
Variance             0.5378   1.4176    0.5733    1.5141
Standard deviation   0.7333   1.1906    0.7572    1.2305

Table 10
Statistical analysis of MSE (dB) values for Example V, same order.

MSE statistics       RGA      PSO       DE        CRPSO
Best                 −9.8970  −15.4363  −23.1876  −85.4755
Worst                −5.4182  −11.9246  −16.2525  −77.4703
Mean                 −7.6586  −13.1824  −19.3509  −80.2171
Variance             2.6672   1.6891    6.0763    7.5312
Standard deviation   1.6331   1.2997    2.4650    2.7443

Table 11
Statistical analysis of MSE (dB) values for Example VI, same order.

MSE statistics       RGA       PSO       DE        CRPSO
Best                 −14.9894  −26.3827  −44.7719  −198.2750
Worst                −10.2919  −24.2022  −40.6673  −194.2840
Mean                 −12.8411  −25.2512  −42.1392  −195.3350
Variance             3.3394    0.9578    2.3506    2.2141
Standard deviation   1.8274    0.9787    1.5332    1.4880

Table 12
Statistical analysis of MSE (dB) values for Example VII, reduced order.

MSE statistics       RGA      PSO       DE        CRPSO
Best                 −7.3166  −14.7756  −24.9485  −42.2988
Worst                −3.3573  −13.4199  −23.7675  −39.2010
Mean                 −5.6671  −14.0680  −24.1550  −40.5645
Variance             2.7681   0.3387    0.1873    1.5006
Standard deviation   1.6638   0.5820    0.4328    1.2250

Table 13
Statistical analysis of MSE (dB) values for Example VIII, same order.

MSE statistics       RGA       PSO       DE        CRPSO
Best                 −13.4679  −29.2082  −35.6996  −138.0980
Worst                −10.6702  −20.9691  −30.7407  −133.1080
Mean                 −11.8941  −23.5018  −33.1344  −136.1450
Variance             1.1647    9.1115    3.2030    2.7777
Standard deviation   1.0792    3.0185    1.7897    1.6666

The performance of CRPSO is also compared with other reported results for the examples cited in this paper for the IIR system identification problem, as shown in Table 14. For Example I, Krusinski and Jenkins suggested the PSO algorithm for the Case 1 model, and the best MSE level of −35 dB is reported in [14]. Panda et al. [6] proposed CSO for the Case 1 and Case 2 models with best MSE levels of 6.35514e−5 and 6.9475e−5, respectively. In this paper, CRPSO results in lower MSE values of −53.0303 dB (4.9770e−06) and −52.1223 dB (6.1343e−06) for the Case 1 and Case 2 models, respectively.
For Example II, Majhi et al. suggested the PSO technique [15] for the Case 1 model, with an MSE value of −38 dB. Panda et al. [6] achieved MSE values of 5.94209e−5 and 0.006705056 for the Case 1 and Case 2 models, respectively, with the CSO technique. CRPSO yields lower MSE levels of −67.3229 dB (1.8523e−07) and −26.7778 dB (0.0021) for the Case 1 and Case 2 models, respectively, as reported in Table 14. Chen and Luk used the Case 2 model for Example III with PSO, and an MSE value of 0.275 is reported in [12]. Majhi et al. in [15] applied PSO and an MSE level of −38 dB is achieved for the Case 1 model. The PSO algorithm was applied to the Case 2 model by Durmus and Gun [16] and an MSE level of 0.015 is reported. In [20], Fang et al. proposed QPSO for the Case 2 model and the best MSE value of 0.173 is reported.
Fang et al. also suggested MuQPSO for the Case 2 model, and an MSE of 0.206 is reported in [24]. In [25], Karaboga applied the DE algorithm and reported an MSE level of 0.0685 for the Case 2 model. In [8] Karaboga also suggested the ABC optimization technique for the Case 2 model and the best MSE level of 0.0706 is reported. Rashedi et al. suggested the GSA technique for the Case 2 model with an MSE level of 0.172 in [9]. The CSO technique is applied by Panda et al. in [6] for the Case 1 and Case 2 models with reported MSE levels of 6.36395e−5 and 0.0175154, respectively. In [5] Dai et al. suggested the SOA technique for the Case 2 model and a best MSE level of 8.2773e−2 is reported. In this paper the CRPSO algorithm results in the lowest MSE levels of −59.6082 dB (1.0944e−06) and −21.8046 dB (0.0066) for the Case 1 and Case 2 models, respectively, as reported in Table 14. For Example IV, Panda et al. suggested the CSO technique [6] for the Case 1 and Case 2 models with MSE values of 6.35201e−5 and 0.001393846, respectively. Luitel and Venayagamoorthy reported MSE values of 7.791e−4 and 0.004 for the Case 1 and Case 2 models, respectively, with the PSO-QI technique [22]. Fang et al. [24] suggested MuQPSO for the Case 1 model with a best MSE level of 2.041e−3. The CRPSO algorithm yields the lowest MSE levels of −56.7599 dB (2.1087e−06) and −35.3765 dB (2.8997e−04) for the Case 1 and Case 2 models, respectively, as shown in Table 14. For Example V, Karaboga suggested the ABC algorithm for the Case 2 model and the best MSE level of 0.0144 is reported in [8]. Luitel and Venayagamoorthy [22] proposed PSO-QI for the Case 1 and Case 2 models with best MSE levels of 7.984e−4 and 0.001, respectively. In this paper, CRPSO results in the lowest MSE value of −85.4755 dB (2.8343e−09) for the Case 1 model, as also shown in Table 14. For Example VI, Durmus and Gun suggested the PSO technique for the Case 1 model and the best MSE of 1.33e−14 is reported in
Table 14
Performance comparison of different reported MSE values.

| Example | Reference | Algorithm | MSE value, same order (Case 1) | MSE value, reduced order (Case 2) |
|---|---|---|---|---|
| Example I | Krusienski and Jenkins [14] | PSO | −35 dB | NR |
| | Panda et al. [6] | CSO | 6.35514e−5 | 6.9475e−5 |
| | Present work | CRPSO | 4.9770e−6 (−53.0303 dB) | 6.1343e−6 (−52.1223 dB) |
| Example II | Majhi et al. [15] | PSO | −38 dB | NR |
| | Panda et al. [6] | CSO | 5.94209e−5 | 0.006705056 |
| | Present work | CRPSO | 1.8523e−7 (−67.3229 dB) | 0.0021 (−26.7778 dB) |
| Example III | Chen and Luk [12] | PSO | NR | 0.275 |
| | Majhi et al. [15] | PSO | −38 dB | NR |
| | Durmus and Gun [16] | PSO | NR | 0.015 |
| | Fang et al. [20] | QPSO | NR | 0.173 |
| | Fang et al. [24] | MuQPSO | NR | 0.206 |
| | Karaboga [25] | DE | NR | 0.0685 |
| | Karaboga [8] | ABC | NR | 0.0706 |
| | Rashedi et al. [9] | GSA | NR | 0.172 |
| | Panda et al. [6] | CSO | 6.36395e−5 | 0.0175154 |
| | Dai et al. [5] | SOA | NR | 8.2773e−2 |
| | Present work | CRPSO | 1.0944e−6 (−59.6082 dB) | 0.0066 (−21.8046 dB) |
| Example IV | Panda et al. [6] | CSO | 6.35201e−5 | 0.001393846 |
| | Luitel and Venayagamoorthy [22] | PSO-QI | 7.791e−4 | 0.004 |
| | Fang et al. [24] | MuQPSO | 2.041e−3 | NR |
| | Present work | CRPSO | 2.1087e−6 (−56.7599 dB) | 2.8997e−4 (−35.3765 dB) |
| Example V | Karaboga [8] | ABC | NR | 0.0144 |
| | Luitel and Venayagamoorthy [22] | PSO-QI | 7.984e−4 | 0.001 |
| | Present work | CRPSO | 2.8343e−9 (−85.4755 dB) | NR |
| Example VI | Durmus and Gun [16] | PSO | 1.33e−14 | NR |
| | Karaboga [8] | ABC | 5.1410e−16 | NR |
| | Present work | CRPSO | 1.4878e−20 (−198.2750 dB) | NR |
| Example VII | Yu et al. [23] | AIWPSO | NR | −32 dB |
| | Present work | CRPSO | NR | 5.8900e−5 (−42.2988 dB) |
| Example VIII | Krusienski and Jenkins [14] | PSO | −39 dB | NR |
| | Yu et al. [23] | AIWPSO | −59 dB | NR |
| | Luitel and Venayagamoorthy [22] | PSO-QI | 7.102e−4 | 0.006 |
| | Krusienski et al. [13] | MPSO | 130 dB | NR |
| | Present work | CRPSO | 1.5497e−14 (−138.0980 dB) | NR |

NR: not reported in the refereed literature.
[16]. Karaboga in [8] applied ABC to the Case 1 model with a best MSE level of 5.1410e−16. CRPSO gives the best MSE value of −198.2750 dB (1.4878e−20) for the Case 1 model, as reported in this paper.

For Example VII, Yu et al. applied the AIWPSO algorithm to the Case 2 model, and a best MSE value of −32 dB is reported in [23]. In this paper, CRPSO gives the best MSE value of −42.2988 dB (5.8900e−05) for the Case 2 model.

For Example VIII, Krusienski and Jenkins applied the PSO technique [14] to the Case 1 model with an MSE value of −39 dB. Yu et al. reported an MSE value of −59 dB for the Case 1 model with the AIWPSO technique [23]. Luitel and Venayagamoorthy [22] applied PSO-QI to the Case 1 and Case 2 models with best MSE levels of 7.102e−4 and 0.006, respectively. The CRPSO algorithm results in the lowest MSE level of −138.0980 dB (1.5497e−14) for the Case 1 model. All of the above information for the comparative study is presented in Table 14.

5. Conclusions

In this paper, the proposed CRPSO algorithm is adopted for finding optimal sets of coefficients of the identifying IIR filters, for both the same order and reduced order models, in the unknown system identification problem. The adoption of the craziness factor brings a noticeable improvement in mimicking the unknown plant, both in the minimum error fitness values produced and in the algorithm convergence profiles. Admittedly, the modifications incorporated in CRPSO increase the complexity of the basic PSO algorithm and hence the computation time needed to find the optimal solution, but the gain in output quality outweighs this added algorithmic complexity. The simulation study thus establishes that the proposed CRPSO technique adopted for system identification is efficient in finding the optimal solution in the multidimensional search space, whereas the other algorithms become trapped in suboptimal solutions.
Hence it can be concluded that the proposed technique would be good enough to handle any unknown system identification problem in future research.

References

[1] Proakis JG, Manolakis DG. Digital Signal Processing: Principles, Algorithms and Applications. 4th ed. Pearson Education; 2007.
[2] Guan X, Chen X, Wu G. QX-LMS adaptive FIR filters for system identification. In: IEEE 2nd International Congress on Image and Signal Processing, CISP'09. 2009. p. 1–5.
[3] Shengkui Z, Zhihong M, Suiyang K. A fast variable step size LMS algorithm with system identification. In: IEEE 2nd Conf. on Industrial Electronics and Applications, ICIEA 2007. 2007. p. 2340–5.
[4] Ma Q, Cowan CFN. Genetic algorithms applied to the adaptation of IIR filters. Signal Process 1996;48(2):155–63.
[5] Dai C, Chen W, Zhu Y. Seeker optimization algorithm for digital IIR filter design. IEEE Trans Ind Electron 2010;57(May (5)).
[6] Panda G, Pradhan PM, Majhi B. IIR system identification using cat swarm optimization. Exp Syst Appl 2011;38(10):12671–83.
[7] Karaboga N, Cetinkaya MH. A novel and efficient algorithm for adaptive filtering: artificial bee colony algorithm. Turk J Elec Eng Comp Sci 2011;19(1):175–90.
[8] Karaboga N. A new design method based on artificial bee colony algorithm for digital IIR filters. J Franklin Inst 2009;346:328–48.
[9] Rashedi E, Nezamabadi H, Saryazdi S. Filter modelling using gravitational search algorithm. Eng Appl Artif Intell 2011;24:117–22.
[10] Majhi B, Panda G. Bacterial foraging based identification of nonlinear dynamic system. In: IEEE Congress on Evolutionary Computation, CEC 2007. 2007. p. 1636–41.
[11] Panda G, Mohanty D, Majhi B, Sahoo G. Identification of nonlinear systems using particle swarm optimization technique. In: IEEE Congress on Evolutionary Computation. 2007. p. 3253–7.
[12] Chen S, Luk BL. Digital IIR filter design using particle swarm optimization. Int J Modell Identification Control 2010;9(4):327–35.
[13] Krusienski DJ, Jenkins WK. Adaptive filtering via particle swarm optimization. In: Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers, vol. 1. 2003 November. p. 571–5.
[14] Krusienski DJ, Jenkins WK. Particle swarm optimization for adaptive IIR filter structure. In: IEEE Congress on Evolutionary Computation, CEC 2004, vol. 1. 2004. p. 965–70.
[15] Majhi B, Panda G, Choubey A. Efficient scheme of pole-zero system identification using particle swarm optimization technique. In: IEEE Congress on Evolutionary Computation, CEC 2008. 2008. p. 446–51.
[16] Durmus B, Gun A. Parameter identification using particle swarm optimization. In: 6th International Advanced Technologies Symposium, IATS'11. 2011. p. 188–92.
[17] Mandal S, Ghoshal SP, Kar R, Mandal D. Design of optimal linear phase FIR high pass filter using craziness based particle swarm optimization technique. J King Saud Univ – Comp Inf Sci 2012;24:83–92.
[18] Mandal S, Ghoshal SP, Kar R, Mandal D. Optimal linear phase FIR band pass filter design using craziness based particle swarm optimization algorithm. J Shanghai Jiaotong Univ (Science) 2011;16(6):696–703.
[19] Pan ST, Chang CY. Particle swarm optimization on D-stable IIR filter design. In: IEEE World Congress on Intelligent Control Automation, WCICA'11. 2011. p. 621–6.
[20] Fang W, Sun J, Xu W. Analysis of adaptive IIR filter design based on quantum-behaved particle swarm optimization. In: 6th IEEE World Congress on Intelligent Control and Automation. 2006. p. 3396–400.
[21] Sun J, Fang W, Xu W. A quantum-behaved particle swarm optimization with diversity-guided mutation for the design of two-dimensional IIR digital filters. IEEE Trans Circ Syst II 2010;57(February (2)):141–5.
[22] Luitel B, Venayagamoorthy GK. Particle swarm optimization with quantum infusion for system identification. Eng Appl Artif Intell 2010;23:635–49.
[23] Yu X, Liu J, Li H. An adaptive inertia weight particle swarm optimization algorithm for IIR digital filter. In: IEEE International Conference on Artificial and Computational Intelligence. 2009. p. 114–8.
[24] Fang W, Sun J, Xu W. A new mutated quantum-behaved particle swarm optimizer for digital IIR filter design. EURASIP J Adv Signal Process 2009;2009:1–7 (Article ID 367465).
[25] Karaboga N. Digital IIR filter design using differential evolution algorithm. EURASIP J Appl Signal Process 2005;2005:1269–76 (Article ID 8).
[26] Mandal S, Ghoshal SP, Kar R, Mandal D. Differential evolution with wavelet mutation in digital FIR filter design. J Opt Theory Appl 2012;155(October (1)):315–24.
[27] Gao H, Diao M. Differential cultural algorithm for digital filters design. In: IEEE 2nd International Conference on Computer Modeling and Simulation. 2010. p. 459–63.
[28] Chen S. IIR model identification using batch-recursive adaptive simulated annealing algorithm. In: Proceedings of the 6th Annual Chinese Automation and Computer Science Conference. 2000. p. 151.
[29] Netto SL, Diniz PSR, Agathoklis P. Adaptive IIR filtering algorithms for system identification: a general framework. IEEE Trans Educ 1995;38(1):54–66.
[30] Holland JH. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: Univ. Michigan Press; 1975.
[31] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4. 1995. p. 1942–8.
[32] Eberhart R, Shi Y. Comparison between genetic algorithm and particle swarm optimization. In: Evolutionary Programming VII. 1998. p. 611–6.
[33] Storn R, Price K. Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces. In: Technical Report, International Computer Science Institute. 1995.
[34] Ling SH, Iu HHC, Leung FHF, Chan KY. Improved hybrid particle swarm optimized wavelet neural network for modeling the development of fluid dispensing for electronic packaging. IEEE Trans Ind Electron 2008;55(9):3447–60.
[35] Biswal B, Dash PK, Panigrahi BK. Power quality disturbance classification using fuzzy C-means algorithm and adaptive particle swarm optimization. IEEE Trans Ind Electron 2009;56(1):212–20.
[36] Mandal D, Ghoshal SP, Bhattacharjee AK. Radiation pattern optimization for concentric circular antenna array with central element feeding using craziness based particle swarm optimization. Int J RF Microw Comp Aided Eng 2010;20(September (5)):577–86.