Implementation of hybrid ANN–PSO algorithm on FPGA for harmonic estimation




Engineering Applications of Artificial Intelligence 25 (2012) 476–483



B. Vasumathi*, S. Moorthi
Department of Electrical and Electronics Engineering, National Institute of Technology, Tiruchirappalli 15, Tamilnadu, India

Article history: Received 30 July 2009; received in revised form 16 September 2011; accepted 23 December 2011; available online 17 January 2012.

Abstract

Harmonic estimation is the main process in active filters for harmonic reduction. A hybrid Adaptive Neural Network–Particle Swarm Optimization (ANN–PSO) algorithm is proposed for harmonic isolation. Originally, Fourier transformation was used to analyze a distorted wave. In order to improve the convergence rate and processing speed, an adaptive neural network algorithm called Adaline has then been used. A further improvement is provided to reduce the error and increase the fineness of harmonic isolation by combining the PSO algorithm with the Adaline algorithm. The inertia weight factor of PSO is combined with the weight factor of Adaline and trained in a neural network environment for better results. ANN–PSO provides uniform convergence, with a convergence rate comparable to that of the Adaline algorithm. The proposed ANN–PSO algorithm is implemented on an FPGA. To validate the performance of ANN–PSO, results are compared with the Adaline algorithm and presented herein.

Keywords: Adaline; Adaptive Neural Network (ANN); Particle Swarm Optimization (PSO); Adaptive Neural Network–Particle Swarm Optimization (ANN–PSO); Field Programmable Gate Arrays (FPGA); Harmonics

* Corresponding author. Tel.: +91 431 2503267. E-mail address: [email protected] (B. Vasumathi).

0952-1976/$ - see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.engappai.2011.12.005

1. Introduction

Current harmonics generated by non-linear loads are causing great concern and have attracted special interest. Harmonics interfere with sensitive electronic equipment and cause undesired power losses in electrical equipment. An array of problems is caused by harmonics, such as overheating, frequent tripping of circuit breakers, frequent fuse blowing, capacitor failures, excessive neutral currents and power-metering inaccuracy. It is therefore necessary to compensate harmonics in order to avoid unwanted losses and to keep electrical equipment within the IEEE 519 standard. Active power filtering has been an effective way of compensating harmonics. It works basically in two steps: harmonic estimation and elimination. The harmonic components of the distorted signals are estimated and injected into the system with the same magnitude but opposite phase for their elimination. To estimate harmonics, Artificial Neural Networks have been used with the back propagation algorithm (Hartana and Richards, 1990). An analog neural method for harmonic isolation, which basically uses an optimization technique to minimize error, was presented by Pecheranin et al. (1994). Also, Dash et al. (1996) have presented a realization of a linear combiner using an


Adaptive Linear Neural Network called Adaline. Adaline uses a non-linear weight adjustment algorithm based on a stable difference equation. This differs from the back propagation algorithm and allows better stability control and convergence speed. Particle Swarm Optimization (PSO) is a parallel evolutionary computation technique developed by Eberhart and Kennedy (1995). It is a stochastic, population-based evolutionary algorithm for problem solving; it can be seen as a swarm intelligence technique based on social behavior and has contributed to many engineering applications. PSO converges quickly towards the optimal position but slows down its convergence speed on reaching the global optimum (Shi and Eberhart, 1999). The performance of PSO has been improved by altering the inertia weight factor in a number of ways (Ko et al., 2009). This paper proposes a new approach that combines the inertia weight factor of PSO with the weight updating rule of Adaline and trains it in a neural network environment. This hybrid ANN–PSO improves weight vector updating by reducing the error to nearly half its value compared with training the ANN only with the Adaline algorithm using the stable difference error equation. Finally, harmonic amplitudes are calculated from the weight vector obtained after convergence, and hence the harmonics are estimated. The proposed ANN–PSO algorithm is further implemented on a Spartan 3E FPGA and the results are verified. ANN–PSO slows its convergence when the error narrows down to its minimum, but only by a very few epochs. ANN–PSO is thus an effective on-line harmonic estimation method, with an error value less than that of Adaline and a speed of convergence comparable to that of Adaline.


2. Literature review

To date, many harmonic estimation techniques have been identified and used. The most effective approaches among them are Neural Network based algorithms, and many studies have been reported on them; this review therefore deals mainly with Neural Network algorithms. The Particle Swarm Optimization algorithm, which optimizes the results, is among the recently used effective approaches for optimization. There are also hybrid approaches that combine Neural and PSO algorithms, which are more effective than either applied individually. The remainder of this review deals with PSO and hybrid combinations of Neural Networks and PSO.

2.1. Adaptive neural networks (ANN) algorithms

Many algorithms are available to evaluate harmonics, of which the Fast Fourier Transform (FFT) developed by Cooley and Tukey (1965) is widely used and forms the basic harmonic estimation technique. During the initial stages, harmonic source monitoring and identification of harmonics using Neural Networks were described by Hartana and Richards (1990). The process of harmonic detection was then simplified, and its processing speed improved, by Pecheranin et al. (1994). Dash et al. (1996) presented the adaptive tracking of harmonic components using the linear adaptive neuron (Adaline) model; the learning rule adopted was the Widrow–Hoff learning rule. An adaptive neural network algorithm for harmonic estimation in active power filters was proposed by Rukonuzzaman and Nakaoka (2001). The concept of minimizing the mean square error by the Least Mean Square (LMS), or Widrow–Hoff, learning rule, and thus moving decision boundaries as far as possible from the training patterns, was demonstrated by Vazquez and Salmeron (2003). A multi-layer feed-forward neural network trained with back propagation was proposed by Villalva et al. (2004); these networks are able to adjust at each time step based on new input and target vectors. An improved on-line tracking scheme which combines fundamental frequency tracking with an Adaline based harmonic analyzer was introduced by Shatshat et al. (2004). The concept of optimizing the weight vector once and using it for on-line tracking of changes in amplitude and phase of the fundamental and harmonic components in the presence of noise was proposed by He and Xu (2008). Recently, Luo et al. (2009) proposed a hybrid active power filter with an adaptive fuzzy dividing frequency-control method. More recently, Radzi and Rahim (2009) proposed a two-layer neural adaptive filter, which uses Widrow–Hoff learning instead of the usual single linear neuron model. A model predictive control strategy using neural networks has been applied to air/fuel ratio control in automotive engines with severe non-linear dynamics (Zhai and Yu, 2009); this model is adapted in an on-line mode to cope with system uncertainty and time-varying effects. An off-line and on-line learning direct adaptive neural controller for an unstable helicopter was proposed by Vijaya kumar et al. (2009), in which the neural controller is designed to track a pitch-rate command signal generated using a reference model. A constructive learning algorithm was employed to design a near-optimal one-hidden-layer neural network (Meleiro et al., 2009); this model determines not only a proper number of hidden neurons but also the particular shape of the activation function for each node.


2.2. Particle swarm optimization (PSO) algorithms

The idea of Particle Swarm Optimization was proposed by Eberhart and Kennedy (1995). A comparative analysis of PSO with genetic algorithms was made by Shi and Eberhart (1998) to prove its convergence superiority. It was further shown that PSO has faster convergence towards the optimal position but slows its convergence speed near the minimum (Shi and Eberhart, 1999). The selection of parameters for the Particle Swarm Optimization algorithm was later discussed (Shi, 2004), where the concept of using adaptive inertia weights to improve PSO's performance near the optimum was suggested. In the years that followed, Trelea (2003) illustrated the trade-off between exploration and exploitation and derived guidelines for graphical parameter selection. Ismael and Fernandes (2005) proposed a modified PSO algorithm that introduces gradient information, or an approximate descent direction, to enable the computation of all the global and local optima of a single multi-modal objective function. The advantages of swarm intelligence, such as scalability, fault tolerance, adaptation, speed, modularity, autonomy and parallelism, were identified by Asghari and Ardebilipour (2005) during research on spread spectrum code estimation. A comparative study of five evolutionary optimization algorithms (Genetic Algorithms, Memetic Algorithms, Particle Swarm Optimization, Ant Colony Systems and Shuffled Frog Leaping) was made by Elbeltagi et al. (2005); among these, PSO was found to be better than the other algorithms in terms of success rate and solution quality, and second best in terms of processing time. Some refinements of PSO were introduced by Banks et al. (2007) to prevent swarm stagnation and to tackle dynamic environments. The stable convergence and good computational efficiency of PSO compared with the Linear Quadratic Regulator (LQR) method was discussed by Nasri et al. (2007).

2.3. Hybrid algorithms

In recent years, hybrid optimization techniques have emerged with tremendous improvements in performance. Da and Xiurun (2004) proposed a Simulated Annealing PSO (SAPSO)-based ANN technique, which has a better ability to escape from local optima and is reported to be more effective than the conventional PSO-based ANN. Cui et al. (2005) presented a hybrid method that produces more compact clustering results than the K-means algorithm when a globalized search is performed over the entire solution space. Recently, Luo et al. (2009) proposed a hybrid active power filter with an adaptive fuzzy dividing frequency-control method. A Neural Network (NN) approach trained with PSO was proposed by Das and Dulger (2009). A PSO method with Non-linear Time-Varying Evolution (PSO-NTVE) was developed more recently by Ko et al. (2009), in which five PSO techniques are discussed: Time-Varying Inertia Weight factor PSO (PSO-TVIW), Random Inertia Weight factor PSO (PSO-RANDW), PSO with Time-Varying Acceleration Coefficients (PSO-TVAC), Time-Varying Non-linear function modulated inertia weight updating, and the newly developed Non-linear Time-Varying Evolution (PSO-NTVE). Each of these approaches improved on the previous one through its modification of the inertia weight factor. A similar approach is taken in this proposal: a hybrid combination of ANN and PSO, in which the weight updating rule of the ANN is modified with the time factor and trained in a neural network environment. Recently, Nourani et al. (2009) proposed the concept of combining wavelet analysis with an ANN for prediction of Ligvanchai watershed precipitation.

3. Harmonic estimation algorithm

A harmonic estimation algorithm based on ANN–PSO has been developed. Here the input vector is trained using the new ANN–PSO weight vector updating rule. The combination of the Widrow–Hoff delta rule



of Adaline with the PSO weight factor proposed by Shi and Eberhart (1998) is used for updating the weight vector, along with the stable difference error equation. The weight vector is trained till it reaches minimum convergence. Finally, harmonic estimation is performed with the final set of weight vectors obtained after convergence.

3.1. Adaline algorithm (adaptive neural network algorithm)

The ANN approach adaptively isolates harmonics using a Fourier linear combiner. The linear combiner is realized using a linear adaptive neural network called Adaline. Adaline has an input sequence, an output sequence and a desired response signal. It also has a set of adjustable parameters, called the weight vector, which is randomly generated initially (Valluvan and Natarajan, 2008). The weight vector of Adaline generates the Fourier coefficients of the signal using a non-linear weight adjustment algorithm and a stable difference equation, given by Eq. (1) (Dash et al., 1996). The weight updating rule, given by Eq. (2), does not incorporate the time factor, which means that the instant of iteration, called the iteration number, is not considered.

e(k) = y(k) - \hat{y}(k)     (1)

where e(k) is the error at time k; y(k) is the actual signal amplitude at time k; \hat{y}(k) is the estimated signal amplitude at time k; \alpha is the learning parameter (reduction factor); X(k) is the input vector at time k. The weight vector of Adaline is updated using the Widrow–Hoff delta rule as

w(k+1) = w(k) + \frac{\alpha\, e(k)\, X(k)}{X^{T}(k)\, X(k)}     (2)

where w(k) = [w_1(k) \ldots w_{2N}(k)\; w_{2N+1}(k)\; w_{2N+2}(k)]^{T} is the weight vector at time k. The amplitude of the harmonics is calculated after error convergence, using the final updated weight vector.
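As a concrete illustration of Eqs. (1) and (2), the following Python/NumPy sketch performs one normalized Widrow–Hoff update step; the function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def adaline_update(w, x, y_actual, alpha=0.02):
    """One Widrow-Hoff (Adaline) update step, Eqs. (1)-(2).

    w        : current weight vector
    x        : input vector X(k) of sine/cosine terms
    y_actual : measured signal sample y(k)
    alpha    : learning (reduction) factor
    """
    y_est = w @ x                           # estimated amplitude, y_hat(k)
    e = y_actual - y_est                    # Eq. (1): tracking error
    w_next = w + alpha * e * x / (x @ x)    # Eq. (2): normalized LMS update
    return w_next, e
```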

3.2. Particle swarm optimization algorithm

In the PSO algorithm, each particle keeps track of its own position and velocity in the problem space. The position and velocity of a particle are initially randomly generated. Then, at each iteration, the new positions and velocities of the particles are updated. The updating rule of PSO is given by Eq. (3), and it incorporates the instant of iteration as a time factor for the update. Since its introduction, much research has been done to improve the original version of PSO. Initially, Shi and Eberhart (1998) used a linearly varying inertia weight over the iterations, given as

w(k) = w_{min} + \frac{iter_{max} - iter}{iter_{max}} \cdot (w_{max} - w_{min})     (3)

where iter_{max} is the maximum number of iterations; iter is the current iteration number; w_{min} is the minimum value of the inertia weight; w_{max} is the maximum value of the inertia weight. This is referred to as Time-Varying Inertia Weight factor PSO (PSO-TVIW). In the Time-Varying Acceleration Coefficient PSO (PSO-TVAC) method (Ko et al., 2009), the cognitive parameter and social parameter change according to

c_1(k) = c_{1min} + \frac{iter_{max} - iter}{iter_{max}} \cdot (c_{1max} - c_{1min})     (4)

c_2(k) = c_{2max} + \frac{iter_{max} - iter}{iter_{max}} \cdot (c_{2min} - c_{2max})     (5)

where c_{1max} and c_{1min} are the maximum and minimum values of the cognitive parameter, and c_{2max} and c_{2min} are the maximum and minimum values of the social parameter. This allows the particles to converge towards the global optimum at the end of the search. In the Time-Varying Non-linear function modulated Inertia Weight updating (Ko et al., 2009), the inertia weight factor is given by

w(k) = w_{min} + \left(\frac{iter_{max} - iter}{iter_{max}}\right)^{\alpha} \cdot (w_{max} - w_{min})     (6)

In addition, the cognitive and social parameters are updated in Non-linear Time-Varying Evolution PSO (PSO-NTVE) as

c_1(k) = c_{1min} + \left(\frac{iter_{max} - iter}{iter_{max}}\right)^{\beta} \cdot (c_{1max} - c_{1min})     (7)

c_2(k) = c_{2max} + \left(\frac{iter_{max} - iter}{iter_{max}}\right)^{\gamma} \cdot (c_{2max} - c_{2min})     (8)

where \alpha, \beta and \gamma are constant coefficients.
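For reference, the schedules of Eqs. (3)–(6) can be written compactly as below (the NTVE acceleration schedules of Eqs. (7)–(8) follow the same pattern with exponents β and γ); the function names and default arguments are illustrative assumptions, not code from the paper.

```python
def time_factor(it, iter_max):
    # common time factor (iter_max - iter) / iter_max used by Eqs. (3)-(8)
    return (iter_max - it) / iter_max

def inertia_tviw(it, iter_max, w_min, w_max):
    # Eq. (3): linearly decreasing inertia weight (PSO-TVIW)
    return w_min + time_factor(it, iter_max) * (w_max - w_min)

def accel_tvac(it, iter_max, c1_min, c1_max, c2_min, c2_max):
    # Eqs. (4)-(5): time-varying acceleration coefficients (PSO-TVAC)
    c1 = c1_min + time_factor(it, iter_max) * (c1_max - c1_min)
    c2 = c2_max + time_factor(it, iter_max) * (c2_min - c2_max)
    return c1, c2

def inertia_nonlinear(it, iter_max, w_min, w_max, a=1.0):
    # Eq. (6): non-linearly modulated inertia weight, exponent a as in the text
    return w_min + time_factor(it, iter_max) ** a * (w_max - w_min)
```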

Thus, all these methods of time-varying PSO adopt (iter_{max} - iter)/iter_{max} as a common time factor for updating. This remains the main updating factor of PSO, and so it can also be used in many other combinations for its improvement.

3.3. Adaptive neural network–PSO (ANN–PSO) algorithm

The combination of Neural Networks and PSO has proven its efficacy in many applications. Motivated by the variety of applications of combined Neural Network and PSO algorithms in the literature, a new approach of combining PSO with a Neural Network has been developed. This new approach combines the weight vector updating rule of Adaline, proposed by Widrow and Hoff, with the updating rules of PSO for adaptive estimation of harmonics. The time factor (iter_{max} - iter)/iter_{max} is combined with the updating rule of Adaline to improve its performance. The newly developed updating rule is trained using a Neural Network. Fig. 1 shows the block diagram of the proposed ANN–PSO algorithm for harmonic estimation.

The general form of a voltage or current waveform from a non-linear load with an angular frequency \omega is the sum of harmonics of unknown magnitudes and phases (Dash et al., 1996). It is represented as

y(t) = \sum_{l=1}^{N} A_{l} \sin(l\omega t + \Phi_{l}) + \epsilon(t)     (9)

where A_l is the amplitude of the harmonic; \Phi_l is the phase of the harmonic; \epsilon(t) is noise; t is the time instant of measurement. The discrete-time version of the signal can be given as

y(k) = \sum_{l=1}^{N} A_{l} \sin\left(\frac{2\pi l k}{N_s} + \Phi_{l}\right) + \epsilon(k)     (10)

y(k) = \sum_{l=1}^{N} A_{l} \cos\Phi_{l} \cdot \sin\frac{2\pi l k}{N_s} + \sum_{l=1}^{N} A_{l} \sin\Phi_{l} \cdot \cos\frac{2\pi l k}{N_s} + \epsilon(k)     (11)

where N_s = f_s/f_o is the sampling rate; f_s is the sampling frequency; f_o is the nominal system frequency. The input to ANN–PSO is given as

X(k) = \left[\sin\frac{2\pi k}{N_s}\;\; \cos\frac{2\pi k}{N_s}\;\; \sin\frac{4\pi k}{N_s}\;\; \cos\frac{4\pi k}{N_s}\; \ldots\; \sin\frac{2N\pi k}{N_s}\;\; \cos\frac{2N\pi k}{N_s}\right]^{T}     (12)

where T denotes the transpose of a quantity.
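To make Eqs. (10) and (12) concrete, the sketch below generates one discrete sample of the distorted signal and the corresponding sine/cosine input vector; the function names are assumptions, and Ns = 50 with N = 25 would match the setup described later in Section 4.

```python
import numpy as np

def input_vector(k, N, Ns):
    """Eq. (12): X(k) = [sin(2*pi*k/Ns), cos(2*pi*k/Ns), ..., sin(2*N*pi*k/Ns), cos(2*N*pi*k/Ns)]^T."""
    l = np.arange(1, N + 1)
    angles = 2.0 * np.pi * l * k / Ns
    # interleave sine and cosine terms harmonic by harmonic
    return np.ravel(np.column_stack((np.sin(angles), np.cos(angles))))

def distorted_sample(k, amplitudes, phases, Ns, noise=0.0):
    """Eq. (10): y(k) = sum_l A_l * sin(2*pi*l*k/Ns + Phi_l) + eps(k)."""
    l = np.arange(1, len(amplitudes) + 1)
    return float(np.sum(amplitudes * np.sin(2.0 * np.pi * l * k / Ns + phases))) + noise
```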



Fig. 1. Block representation of ANN–PSO algorithm.

The weight vector is updated using the combined ANN–PSO rule, given as

w(k+1) = w(k) + \frac{iter_{max} - iter}{iter_{max}} \cdot \frac{\alpha\, e(k)\, X(k)}{X^{T}(k)\, X(k)}     (13)

where w(k) = [w_1(k) \ldots w_{2N}(k)\; w_{2N+1}(k)\; w_{2N+2}(k)]^{T} is the weight vector at time k; e(k) is the error at time k; y(k) is the actual signal amplitude at time k; \hat{y}(k) is the estimated signal amplitude at time k; \alpha is the learning parameter (reduction factor); X(k) is the input vector at time k; iter_{max} is the maximum number of iterations; iter is the current iteration number. After the convergence of the tracking error, the harmonic amplitudes can be calculated from the final set of updated weight vectors. The amplitude and phase of the Nth harmonic are given by

A_N = \sqrt{w_0^2(2N-1) + w_0^2(2N)}     (14)

and

\Phi_N = \tan^{-1}\left\{\frac{w_0(2N-1)}{w_0(2N)}\right\}     (15)

where w_0 is the weight vector obtained after convergence. This algorithm is very effective and accurate compared with the normal Adaline algorithm. The error value of the stable difference error equation after convergence is reduced compared with the original Adaline algorithm. The weight vector converges to the maximum value of adaptation and hence almost traces the harmonics as measured by FFT techniques. The rate of convergence is much faster than that of FFT techniques, providing an easier on-line measurement. The proposed algorithm thus provides a rate of convergence similar to that of Adaline but with a minimum error value. Hence it proves to be

Table 1 Pseudo code for ANN–PSO algorithm.

Step 1: Initialize the ANN–PSO network
Step 2: Set the number of input and output nodes
Step 3: Initialization
    Initialize weight values between -1.0 and 1.0
    Select the learning rate between 0 and 1.0
Step 4: Calculation of weight values
    Initialize the learning process
    Calculate error values (using the error equation)
    Update weight values
Step 5: Check for convergence of error
    While (minimum convergence)
        {Calculate final weight values}
    End
Step 6: Check for further convergence till the maximum number of iterations
Step 7: End

more advantageous and accurate than Adaline for on-line harmonic tracking. The pseudo code for ANN–PSO is given in Table 1.
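Following the pseudo code of Table 1, a minimal Python/NumPy sketch of the ANN–PSO estimator is given below. It reuses the input_vector helper sketched after Eq. (12); for brevity the weight vector is taken with 2N components (the two extra components w_{2N+1}, w_{2N+2} of the full Adaline model are omitted), and the tolerance, random seed and epoch structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ann_pso_estimate(samples, N, Ns, alpha=0.02, iter_max=5000, tol=1e-4, seed=0):
    """Sketch of the ANN-PSO estimator of Eqs. (13)-(15) and Table 1.

    samples : one cycle of the distorted waveform, y(k) for k = 1..Ns
    Returns estimated harmonic amplitudes A_1..A_N and phases Phi_1..Phi_N.
    """
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, size=2 * N)          # Step 3: random initial weights in [-1, 1]
    for it in range(1, iter_max + 1):               # epochs over the training data
        max_err = 0.0
        for k, y_k in enumerate(samples, start=1):
            x = input_vector(k, N, Ns)              # Eq. (12); helper sketched earlier
            e = y_k - w @ x                         # Eq. (1): tracking error
            t = (iter_max - it) / iter_max          # PSO time factor
            w = w + t * alpha * e * x / (x @ x)     # Eq. (13): combined ANN-PSO update
            max_err = max(max_err, abs(e))
        if max_err < tol:                           # Step 5: error convergence check
            break
    amp = np.hypot(w[0::2], w[1::2])                # Eq. (14): A_n from the converged weights
    phase = np.arctan2(w[0::2], w[1::2])            # Eq. (15): Phi_n from the converged weights
    return amp, phase
```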

4. Application of ANN–PSO

In power engineering, distortion due to harmonics is the main issue. Hence isolation of harmonics is needed for better performance of power equipment. Initially, estimation is done through simulation using Matlab and the effectiveness of the algorithm is verified. In this proposal, the Personal Computer (PC) load current is taken to be the non-linear load current. The actual non-linear load current from the PC load, obtained using the Power Quality Analyzer (PQA) model CA 8332 of Chauvin-Arnoux, is level shifted



using a current transformer and then discretized into 50 samples at an interval of 400 µs, spanning a complete cycle of 20 ms. From the discrete data obtained at the different sampling instants, the waveform model in Fig. 2 is simulated using Matlab. This non-linear load current serves as the input for adaptation, from which the harmonics are estimated and the Total Harmonic Distortion (THD) is obtained. To compare the effectiveness of the algorithm, harmonic isolation is done with both the Adaline and the ANN–PSO algorithms. Matlab code is written for both algorithms and the results are compared. In ANN–PSO, a training vector of 50 x 50 samples is formed. The number of rows of the training vector is chosen to be 50 so as to compute up to the 25th harmonic order. The number of columns of the training vector is chosen to be 50 to accommodate the 50 samples taken over one cycle of 50 Hz. The error comparison for various learning rates, for the first five error vector values, is shown in Table 2. From the table, the learning rate (α) is chosen to be 0.02, where minimum error convergence is obtained. Initially, the weight vectors are randomly generated. The input, along with the training vector, is trained to minimize the error of the difference error equation and then to update the weight vector. The training process continues until the error converges to its least approximation towards zero. The convergence graphs of the error vector and weight vector for the learning rate of 0.02 are presented in Figs. 3 and 4. A similar convergence is obtained for a learning rate of 0.04. To show the improvement of ANN–PSO over Adaline, comparison plots of the error vector for learning rates of 0.02 and 0.04 are presented in Figs. 5 and 6. For α of 0.02, the Adaline algorithm starts its convergence at the 3rd epoch and attains maximum convergence to a minimum error of 0.0069 by the 20th epoch. The error stays thereafter at 0.0069 even up to 5000 epochs and more. Also, in

the ANN–PSO algorithm, the convergence starts at the 3rd epoch but reaches a minimum error of 0.0030 at the 27th epoch, as shown in Fig. 5b. Even when considered at epoch 20, the error vector of ANN–PSO stays at approximately 0.0057 at most, which is less than that of Adaline. For α of 0.04, the Adaline algorithm starts its convergence at the 3rd epoch and attains maximum convergence to a minimum error of 0.0104 by the 20th epoch. The error stays thereafter at 0.0104 even up to 5000 epochs and more.

Fig. 3. Error convergence graph for ANN–PSO.

Fig. 2. Load current wave form of a PC simulated in Matlab (load current in A vs. sampling instance).

Fig. 4. Weight convergence graph for ANN–PSO.

Table 2
Error values for various learning rates (α).

Error   α=0.007    α=0.008    α=0.009    α=0.01     α=0.02    α=0.03    α=0.04    α=0.05    α=0.06
e(1)    -0.7177    -0.49354   -0.3359    -0.2260    0.0030    0.0057    0.0059    0.0060    0.0060
e(2)    -0.1484    -0.1001    -0.0667    -0.0437    0.0048    0.0057    0.0058    0.0059    0.0060
e(3)    -0.1360    -0.0918    -0.0605    -0.0386    0.0054    0.0057    0.0058    0.0059    0.0060
e(4)    -0.0662    -0.0430    -0.027     -0.0167    0.0052    0.0057    0.0058    0.0058    0.0059
e(5)    -0.2673    -0.1822    -0.1222    -0.0804    0.0048    0.0057    0.0057    0.0058    0.0059

Fig. 5. Error convergence of ANN–PSO compared with Adaline for α = 0.02. (a) Graph showing complete convergence, (b) expanded graph (error vector vs. number of epochs).

Also, in the ANN–PSO algorithm, the convergence starts at the 3rd epoch but reaches a minimum error of 0.0046 at the 30th epoch, as shown in Fig. 6b. For both learning rates, an improvement of approximately 56% in the converged error value is obtained. It can be inferred that ANN–PSO converges more slowly and smoothly than Adaline, which produces a somewhat oscillatory convergence. The reduction in the error value with ANN–PSO can also be clearly observed. From the weight vector obtained after final convergence, the harmonics are calculated. Table 3 presents the comparison of the estimated harmonics of both algorithms with the PQA, from which the lower error percentage of ANN–PSO can be inferred. The overall error percentage of ANN–PSO is 0.6294%, whereas that of Adaline is 0.7624%. This amounts to approximately a 15% overall improvement in the isolation of harmonics, with faster and uniform convergence for on-line tracking. To determine the exactness of ANN–PSO in isolating harmonics, both the ANN–PSO and Adaline algorithms are compared with a standard Power Quality Analyzer measurement of the harmonics. The PQA model CA 8332 of Chauvin-Arnoux, from which the initial data for simulation were obtained, adopts the FFT technique for harmonic measurement and is used for the comparison.
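The per-harmonic deviations in Table 3 follow from the PQA and estimated amplitudes as sketched below (the aggregation behind the overall error percentage is not spelled out in the text, so only the per-harmonic figure is shown); the function and variable names are assumptions.

```python
import numpy as np

def error_percent(pqa, estimated):
    """Per-harmonic deviation of the estimate from the PQA reference, in percent."""
    pqa, estimated = np.asarray(pqa, dtype=float), np.asarray(estimated, dtype=float)
    return 100.0 * (pqa - estimated) / pqa

# e.g. the first harmonic in Table 3: PQA 68.4 vs. ANN-PSO 67.819 gives about 0.85 %
print(error_percent([68.4], [67.819]))
```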

5. FPGA implementation of ANN–PSO algorithm

After validating the results through simulation, the algorithm is implemented on a Field Programmable Gate Array (FPGA) in real time. The real-time PC load current waveform acquired from the PQA model CA 8332 of Chauvin-Arnoux, used for the implementation, is shown in Fig. 7.

Fig. 6. Error convergence of ANN–PSO compared with Adaline for α = 0.04. (a) Graph showing complete convergence, (b) expanded graph.

Table 3
Comparison of ANN–PSO and Adaline with PQA: estimated harmonics of ANN–PSO and ANN (Adaline) versus PQA.

Sl. no.   PQA      ANN–PSO    Error % PQA with ANN–PSO   ANN       Error % PQA with ANN
1         68.4     67.819       0.8494                   67.769      0.9227
2         18.1     18.474      -2.0245                   18.503     -2.1719
3         57.8     57.12        1.1761                   57.169      1.0922
4         11.6     11.572       0.2379                   11.537      0.5448
5         39.7     39.479       0.5592                   39.473      0.5708
6         4.7      5.0708      -7.3125                   5.0979     -7.8052
7         19.5     19.685      -0.9423                   19.703     -1.0328
8         2.6      2.7034      -3.8248                   2.7181     -4.3414
9         5.4      4.3645      19.1759                   4.3777     18.933
10        4.6      4.3924       4.5130                   4.3761      4.8674
11        4.2      5.092      -17.5177                   5.1146    -17.8805
12        4.1      4.1013      -0.0317                   4.0893      0.2585
13        4.3      5.6213     -23.5052                   5.6046    -30.3395
14        2.7      2.7911      -3.2639                   2.8164     -4.3074
15        1.5      1.4864       0.9067                   1.4966      0.2267
16        4.0      5.1955     -23.0103                   5.2049    -23.1479
17        2.0      2.5007     -20.0224                   2.5107    -20.3409
18        3.1      4.2065     -26.3045                   4.2018    -26.2221
19        2.3      3.0205     -23.8537                   3.0226    -23.9066
20        1.5      2.2436     -33.1432                   2.2501    -33.3363
21        1.5      1.859      -19.3115                   1.8473    -18.8004
22        1.5      2.8701     -47.7370                   2.8804    -47.9239
23        0.5      1.0939     -54.2920                   1.1061    -52.8746
24        1.5      2.9613     -49.3466                   2.9548    -49.2351
THD       127.1%   128.23%                               128.85%
Overall error %               0.6294%                               0.7624%




Fig. 7. Load current and supply voltage wave form of a PC.

VHSIC Hardware Description Language (VHDL) code has been written in ModelSim for the implementation of the ANN–PSO algorithm on a Spartan 3E FPGA. VHDL code for the Adaline algorithm is also developed, implemented and compared with ANN–PSO to prove the efficiency of the proposed algorithm. The implementation of the algorithms on the Spartan 3E FPGA was initially difficult due to memory constraints; three modules, for the error vector, weight vector and amplitude, are therefore generated individually for both algorithms and implemented. A training vector of 50 x 50 samples of a pure sinusoidal waveform is provided. Weight vectors are randomly generated initially. The load current waveform is discretized into 50 samples, and this serves as the input for the algorithm. Both algorithms are then trained for convergence. The error and weight vector modules update the error and weight vectors after each iteration. The convergence graphs, when plotted, are similar to those obtained in simulation. The error vectors of both algorithms obtained after convergence are arrayed and compared in Table 4. Adaline converges to a minimum error of 0.0069 at the 30th epoch, whereas ANN–PSO converges to an error of 0.0030; this improves the error by approximately 56%. Considering the entire error vector, an average error of 0.0070 is obtained with Adaline and 0.0035 with ANN–PSO, so an improvement of 50% in error on average is obtained when implemented in real time. The amplitude module captures the final set of weight vectors obtained after convergence and calculates the amplitudes of the harmonics. To simplify the calculation of Total Harmonic Distortion (THD), only the squares of the harmonic amplitudes are calculated. The harmonic amplitudes are tabulated and compared with the results of the PQA, as shown in Table 5. The overall error percentage of ANN–PSO compared with the PQA is 15.88%, and that of Adaline compared with the PQA is 19.25%. It is evident that approximately a 16.95% improvement is shown by ANN–PSO over Adaline in harmonic isolation. The proposed algorithm is thus validated in real time.
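Because the amplitude module outputs squared amplitudes, THD can be evaluated without taking a square root per harmonic. The sketch below uses the usual ratio of the distortion r.m.s. to the fundamental, which is consistent with the PQA THD figure in Table 5 (about 154.6%); the function name is an assumption.

```python
import math

def thd_from_squares(amp_squared):
    """THD (%) from squared harmonic amplitudes; amp_squared[0] is the squared fundamental."""
    fundamental_sq = amp_squared[0]
    distortion_sq = sum(amp_squared[1:])    # sum of squares of the higher-order harmonics
    return 100.0 * math.sqrt(distortion_sq / fundamental_sq)
```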


Table 4
Error vector convergence of ANN–PSO compared with Adaline when generated in VHDL.

Sl. no.   Error vector (Epoch 30)        Error difference
          ANN–PSO      Adaline
1         0.0030       0.0069            0.0039
2         0.0032       0.0069            0.0037
3         0.0033       0.0069            0.0036
4         0.0035       0.0069            0.0034
5         0.0038       0.0070            0.0032
6         0.0040       0.0070            0.0030
7         0.0042       0.0070            0.0028
8         0.0044       0.0070            0.0026
9         0.0046       0.0069            0.0024
10        0.0048       0.0069            0.0021
11        0.0050       0.0070            0.0020
12        0.0052       0.0070            0.0018
13        0.0054       0.0070            0.0016
14        0.0055       0.0070            0.0015
15        0.0055       0.0069            0.0014
16        0.0056       0.0069            0.0013
17        0.0056       0.0070            0.0014
18        0.0056       0.0070            0.0014
19        0.0055       0.0070            0.0015
20        0.0053       0.0070            0.0017
21        0.0051       0.0069            0.0018
22        0.0049       0.0069            0.0020
23        0.0047       0.0069            0.0022
24        0.0045       0.0070            0.0025
25        0.0042       0.0070            0.0028
26        0.0039       0.0070            0.0031
27        0.0035       0.0069            0.0034
28        0.0032       0.0069            0.0037
29        0.0030       0.0069            0.0039
30        0.0027       0.0070            0.0043
31        0.0025       0.0070            0.0045
32        0.0023       0.0070            0.0047
33        0.0021       0.0070            0.0049
34        0.0020       0.0069            0.0049
35        0.0018       0.0069            0.0051
36        0.0018       0.0070            0.0052
37        0.0018       0.0070            0.0052
38        0.0018       0.0070            0.0052
39        0.0018       0.0070            0.0052
40        0.0018       0.0069            0.0051
41        0.0018       0.0069            0.0051
42        0.0019       0.0070            0.0051
43        0.0020       0.0070            0.0050
44        0.0021       0.0070            0.0049
45        0.0022       0.0070            0.0048
46        0.0023       0.0069            0.0046
47        0.0024       0.0069            0.0045
48        0.0026       0.0069            0.0043
49        0.0027       0.0070            0.0043
50        0.0029       0.0070            0.0041
Average   0.0035       0.0070            0.0035
Average improvement in error: 50.0%

6. Conclusion

Harmonics need to be effectively estimated for elimination. Only an effective estimation can lead to efficient elimination.

A hybrid ANN–PSO algorithm is presented for harmonic estimation in active filters. The analysis shows that ANN–PSO converges to an error value almost 50–56% lower than that of the Adaline algorithm; the task of cutting the already small error roughly in half is thus achieved. ANN–PSO improves the determination of harmonics by almost 15–16% and hence provides excellent harmonic estimation compared with Adaline when implemented. The convergence rate of ANN–PSO is comparable with that of Adaline, and this can provide excellent performance in on-line tracking of harmonics compared with existing FFT based algorithms. In the long run, fineness of harmonic estimation is attained using ANN–PSO.



Table 5
Comparison of ANN–PSO and Adaline (ANN) with PQA when generated in VHDL and implemented on FPGA: estimated harmonics (amp. = harmonic amplitude; amp.² = squared harmonic amplitude).

Sl. no.   PQA amp.   PQA amp.²   ANN–PSO amp.   ANN–PSO amp.²   Error % PQA with ANN–PSO   ANN amp.   ANN amp.²   Error % PQA with ANN
1         68.00      4624.00     67.82          4599.43          0.26                      67.77      4592.58      0.34
2         28.70      823.69      28.73          825.19          -0.10                      28.93      837.16      -0.80
3         65.50      4290.25     64.49          4159.25          1.54                      64.44      4153.04      1.62
4         25.20      635.04      25.03          626.33           0.67                      25.00      624.81       0.79
5         62.50      3906.25     62.06          3851.99          0.70                      62.06      3851.81      0.70
6         6.80       46.24       6.86           47.10           -0.88                      6.90       47.61       -1.47
7         30.80      948.64      30.85          951.86          -0.16                      30.92      956.16      -0.39
8         3.60       12.96       3.66           13.39           -1.67                      3.68       13.53       -2.22
9         6.10       37.21       5.91           34.89            3.11                      5.92       35.10        2.95
10        6.10       37.21       5.94           35.34            2.62                      5.92       35.08        2.95
11        6.70       44.89       6.89           47.50           -2.84                      6.71       45.02       -0.15
12        5.80       33.64       5.55           30.81            4.31                      5.53       30.63        4.66
13        7.60       57.76       7.62           57.99            0.26                      7.61       57.84       -0.13
14        3.80       14.44       3.78           14.27            0.53                      3.39       14.33       10.79
15        2.50       6.25        2.01           4.05            19.60                      2.03       4.10        18.80
16        7.00       49.00       7.03           49.45           -0.43                      7.04       49.62       -0.57
17        3.50       12.25       3.38           11.46            3.43                      3.40       11.54        2.86
18        5.70       32.49       5.69           32.41            0.18                      5.69       32.34        0.18
19        4.10       16.81       4.09           16.71            0.24                      4.09       16.73        2.44
20        3.00       9.00        3.31           9.22            -0.10                      3.05       9.27        -1.67
21        2.20       4.84        2.22           4.93            -0.09                      2.23       4.96        -1.36
22        3.80       14.44       3.88           15.09           -2.11                      3.90       15.20       -2.63
23        1.40       1.96        1.48           2.19            -5.71                      1.50       2.24        -7.14
24        3.90       15.21       4.01           16.06           -2.82                      4.00       15.99       -2.56
THD %     154.59                 153.64                                                    153.80
Overall error %                                 15.88                                                  19.25

References

Asghari, V., Ardebilipour, M., R., 2005. Spread spectrum code estimation by particle swarm algorithm. Int. J. Signal Process. 2 (1), 268–272.
Banks, A., Vincent, J., Anyakoha, C., 2007. A review of particle swarm optimization. Part I: background and development. Natural Comput. 6 (4), 467–484.
Cooley, J.W., Tukey, J.W., 1965. An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19 (90), 297–301.
Cui, X., Potok, T.E., Palathingal, P., 2005. Document clustering using particle swarm optimization. IEEE Swarm Intell. Symp., 189–191.
Da, Y., Xiurun, G., 2004. An improved PSO-based ANN with simulated annealing technique. Neurocomput. 63, 527–533.
Das, M.T., Dulger, L.C., 2009. Signature verification (SV) toolbox: application of PSO-NN. Eng. Appl. Artif. Intell. 22 (4–5), 688–694.
Dash, P.K., Swain, P.D., Liew, A.C., Rahman, S., 1996. An adaptive linear combiner for on-line tracking of power system harmonics. IEEE Trans. Power Syst. 11 (4), 1730–1735.
Eberhart, R., Kennedy, J., 1995. A new optimizer using particle swarm theory. In: Proceedings of the Sixth IEEE International Symposium on Micro Machine and Human Science, pp. 39–43.
Elbeltagi, E., Hegazy, T., Grierson, D., 2005. Comparison among five evolutionary-based optimization algorithms. Adv. Eng. Inform. 19 (1), 43–53.
Hartana, R.K., Richards, G.G., 1990. Harmonic source monitoring and identification using neural networks. IEEE Trans. Power Syst. 5 (4), 1098–1104.
He, S., Xu, X., 2008. Hardware/software co-design approach for an Adaline based adaptive control system. J. Comput. 3 (2), 29–36.
Ismael, A.F.V., Fernandes, E.M.G.P., 2005. Particle swarm algorithms for multi-local optimization. VII congreso de galego de estatistica e investigacion de operations, Guimaraes, 26.
Ko, C.N., Chang, Y.P., Wu, C.J., 2009. A PSO method with nonlinear time-varying evolution for optimal design of harmonic filters. IEEE Trans. Power Syst. 24 (1), 437–444.
Luo, A., Shuai, Z., Zhu, W., Fan, R., Tu, C., 2009. Development of hybrid active power filter based on the adaptive fuzzy dividing frequency-control method. IEEE Trans. Power Delivery 24 (1), 424–432.
Meleiro, L.A.C., Von Zuben, F.J., Filho, R.M., 2009. Constructive learning neural network applied to identification and control of a fuel-ethanol fermentation process. Eng. Appl. Artif. Intell. 22 (2), 201–215.
Nasri, M., Nezamabadi-pour, H., Maghfoori, M., 2007. A PSO-based optimum design of PID controller for a linear brushless DC motor. In: Proceedings of World Academy of Science, Engineering and Technology, 20, pp. 211–215.


Nourani, V., Alami, M.T., Aminfar, M.H., 2009. A combined neural–wavelet model for prediction of Ligvanchai watershed precipitation. Eng. Appl. Artif. Intell. 22, 466–472.
Pecheranin, N., Sone, M., Mitsui, H., 1994. An application of neural network for harmonic detection in active filters. In: Proceedings of the IEEE International Conference on Neural Networks: IEEE World Congress on Computational Intelligence, Vol. 6, pp. 3756–3760.
Radzi, M.A.M., Rahim, N.A., 2009. Neural network and bandless hysteresis approach to control switched capacitor active power filter for reduction of harmonics. IEEE Trans. Ind. Electron. 56 (5), 1477–1483.
Rukonuzzaman, M., Nakaoka, M., 2001. Adaptive neural network based harmonic current compensation in active filters. In: Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN'01), Vol. 3, pp. 2281–2286.
Shatshat, R.E., Kazerani, M., Salama, M.M.A., 2004. On-line tracking and mitigation of power system harmonics using ADALINE-based active power filter system. In: Proceedings of the Canadian Conference on Electrical and Computer Engineering, (4), pp. 2119–2124.
Shi, Y., 2004. Particle swarm optimization. Feature Article: Electronic Data Systems, Inc., IEEE Neural Network Soc., pp. 8–13.
Shi, Y., Eberhart, R., 1998. Parameter selection in particle swarm optimization. In: Proceedings of the Seventh IEEE Annual Conference on Evolutionary Programming, pp. 69–73.
Shi, Y., Eberhart, R.C., 1999. Empirical study of particle swarm optimization. In: Proceedings of the Twelfth IEEE International Conference on Artificial Intelligence (IJCA), pp. 1945–1950.
Trelea, I.C., 2003. The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf. Process. Lett. 85 (6), 317–325.
Valluvan, K.R., Natarajan, A.M., 2008. Implementation of Adaline algorithm on a FPGA for computation of total harmonic distortion of load current. ICGST-ACSE J. 8 (2), 35–41.
Vazquez, J.R., Salmeron, P., 2003. Active power filter control using neural network technologies. IEE Electric Power Appl. 150 (2), 139–145.
Vijaya kumar, M., Suresh, S., Omkar, S.N., Ganguli, R., Sampath, P., 2009. A direct adaptive neural command controller design for an unstable helicopter. Eng. Appl. Artif. Intell. 22 (2), 181–191.
Villalva, M.G., de Siqueira, T.G., de Oliveira, M.E., Ruppert, F.E., 2004. Current controller with artificial neural networks for active filter. In: Proceedings of the Second International Conference on Power Electronics, Machines and Drives, (2), pp. 626–631.
Zhai, Y.J., Yu, D.L., 2009. Neural network model-based automotive engine air/fuel ratio and robustness evaluation. Eng. Appl. Artif. Intell. 22, 171–180.