Loadability margin calculation of power system with SVC using artificial neural network


Engineering Applications of Artificial Intelligence 18 (2005) 695–703 www.elsevier.com/locate/engappai

P.K. Modi (a,b), S.P. Singh (b), J.D. Sharma (b)

(a) Electrical Engineering Department, University College of Engineering, Burla-768018, India
(b) Electrical Engineering Department, Indian Institute of Technology, Roorkee-247667, India

Received 6 May 2003; received in revised form 12 December 2004; accepted 25 January 2005. Available online 11 March 2005.

Abstract

Voltage stability has become a major concern among utilities over the past decade. With the development of FACTS devices, there is a growing interest in using these devices to improve stability. In this paper, a method using a parallel self-organizing hierarchical neural network (PSHNN) is proposed to estimate the loadability margin of a power system with a static var compensator (SVC). Limits on reactive generation are considered. Real and reactive power injections, along with the firing angle of the SVC and the voltage of the bus at which the SVC is connected, are taken as input features. To improve the performance of the network, K-means clustering is employed to form clusters of patterns having similar loadability margins. To reduce the number of input features in each cluster, the system entropy information gain method is used, and only those real and reactive power injections which affect the loadability margin most are selected. A separate PSHNN is trained for each cluster. The proposed method is implemented on the IEEE-30 bus and IEEE-118 bus systems. Once trained, the network produces the output with accuracy and speed. The computation time is also independent of the system size and the load pattern.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Voltage stability; Neural networks; Static var compensator

Corresponding author: Electrical Engineering Department, University College of Engineering, Burla 768018, India. Tel.: +91 663 2430891; fax: +91 663 2430204. E-mail addresses: [email protected] (P.K. Modi), [email protected] (S.P. Singh), [email protected] (J.D. Sharma). doi:10.1016/j.engappai.2005.01.006

1. Introduction

During the last decade, the voltage collapse phenomenon has received increasing attention throughout the world due to major failures. Several voltage stability indices and margins have been proposed for voltage stability analysis. Some of these are based on eigenvalue and singular value analysis of the system Jacobian matrix. However, the behavior of these indices is highly nonlinear, and for large systems they are also computationally expensive. System load margins, such as the real power margin and reactive power margin, have also been used in voltage stability analysis. The real power margin can be calculated using the point of collapse method or the continuation method, or it can be formulated as an optimization problem (Van Cutsem and Vournas, 1998).

Artificial neural networks (ANN) have gained popularity in approximating a function because of their efficiency and speed. They have been successfully used in various power system analyses (Vidyasagar and Rao, 1993). Applications of ANN have been proposed for voltage stability evaluation in (Jeysurya, 1994). With the development of high power semiconductor devices, it has become possible to use these devices in power systems. This led to the development of the flexible AC transmission system, popularly known as FACTS (Hingorani, 1993). The static var compensator (SVC) is one of the important thyristor-controlled FACTS devices, whose effectiveness in voltage control is well known. The application of SVC in electric power systems has been discussed widely


(Song and Johns, 1999). However, neural networks have not been applied to the voltage stability analysis of a power system with SVC. Further, in the available literature, limits on reactive generation are not considered.

Parallel self-organizing hierarchical neural networks (PSHNN) are multi-stage networks in which the stages operate in parallel rather than in series (Ersoy and Dneg, 1995). A method based on PSHNN has been proposed for voltage contingency ranking by determining the post-contingency loadability margin, with promising results (Pandit et al., 2001). However, SVC was not considered in that work.

In this work, a method using PSHNN is developed to estimate the loadability margin of a power system with SVC. Limits on reactive generation are considered. Real and reactive power injections, along with the firing angle of the SVC and the voltage of the bus at which the SVC is connected, are taken as input features. Using the K-means clustering technique (Nabney, 2000), input patterns having similar values of loadability margin are grouped to improve the performance of the neural network. Using a feature extraction technique based on the system entropy method (Pao, 1989), the real and reactive power injections affecting the loadability margin most are selected, so that a reduced set of input features is used to train the neural network, thus reducing the size of the network. Care has been taken to generalize the neural network by using the early stopping method and by generating a large set of training data points. The proposed method is applied to the IEEE-30 bus and IEEE-118 bus systems (power system test cases). This method is superior in accuracy to the methods proposed in the earlier literature; in (Jeyasurya, 2000), errors on the order of 7–10% were reported.

2. Static var compensator (SVC) model

This model is based on representing the FACTS controller as a variable impedance (Canizares, 1999a). The fixed capacitor with thyristor controlled reactor (FC-TCR) configuration of the SVC is used in this analysis. The controller is composed of a fixed capacitor (X_C), a fixed reactor (X_L), and a bi-directional thyristor valve composed of two thyristors. The SVC is usually connected to the transmission system through a step-down transformer, which can be treated like the other transformers in the system, as shown in Fig. 1. A steady state circuit representation of the connection of the SVC through a step-down transformer is illustrated in Fig. 2, where V_i,svc is the voltage magnitude of the ith bus at which the SVC is connected, V_svc is the voltage across the controller, X_TH is the impedance of the step-down transformer, Q_svc is the reactive power that the SVC injects into the power network, and I_svc is the current

[Figure] Fig. 1. Basic model of SVC.

[Figure] Fig. 2. Steady state representation of SVC.

through the SVC, B_e is the equivalent admittance of the SVC, V_svcref is a reference voltage for the controller, X_SL is the SVC control slope, α_svc is the firing angle, and α_svcmin, α_svcmax represent the lower and upper limits on the firing angle. The steady state model of the SVC can be represented as

    [ V_i,svc − V_svcref − X_SL I_svc                   ]
    [ π X_L B_e − 2α_svc + sin(2α_svc) + π(2 − X_L/X_C) ]  = 0.     (1)
    [ I_svc − V_svc B_e                                 ]
    [ Q_svc − V_svc² B_e                                ]

For the steady state model to be complete, all SVC controller limits should be adequately represented. From the V–I characteristic presented in Fig. 3, the limits of the device are I_max for V_i,svc varying from V_min to V_max, with Q_svc varying from Q_maxC to Q_maxL. At the firing angle limits, the SVC is transformed into a fixed reactance.
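The second row of Eq. (1) can be solved for the equivalent admittance B_e at a given firing angle. A minimal sketch, assuming per-unit reactances; the function name and numeric values are illustrative, not from the paper:

```python
import math

def svc_equivalent_admittance(alpha, x_l, x_c):
    """Equivalent SVC admittance B_e from the firing angle alpha (rad),
    obtained by solving the second row of Eq. (1) for B_e."""
    return (2 * alpha - math.sin(2 * alpha)
            - math.pi * (2 - x_l / x_c)) / (math.pi * x_l)

# Sanity checks: at alpha = pi the TCR does not conduct, leaving only the
# fixed capacitor (B_e = 1/X_C); at alpha = pi/2 the TCR conducts fully
# (B_e = 1/X_C - 1/X_L).
b_off = svc_equivalent_admittance(math.pi, 0.5, 1.0)
b_full = svc_equivalent_admittance(math.pi / 2, 0.5, 1.0)
```

The two limiting firing angles reproduce the fixed-reactance behavior noted above.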

[Figure] Fig. 3. V–I characteristic of SVC.

3. Performance index

The voltage stability margin is defined as the distance, with respect to the bifurcation parameter, from the current operating point to the voltage collapse point (Van Cutsem and Vournas, 1998). The system is assumed to be voltage secure if this margin is reasonably high. In this paper, this voltage stability margin is referred to as the loadability margin and is calculated using the continuation method. The continuation method is implemented in a power flow program called UWPFLOW (Canizares, 1999b). The SVC model presented above is also incorporated in UWPFLOW.

4. Proposed methodology

The proposed methodology is depicted in Fig. 4, where [P_1, P_2, ..., P_np] is the vector of nonzero real power injections; there are n such vectors, one per pattern, and np is the number of buses at which the real power injections are nonzero. [Q_1, Q_2, ..., Q_nq] is the vector of nonzero reactive power injections; there are n such vectors, one per pattern, and nq is the number of buses at which the reactive power injections are nonzero. λ is the loadability margin of the system with SVC. The proposed methodology is explained below.

4.1. Pattern generation

Perturbing the loads at each bus randomly over a wide range, n patterns are generated. Using UWPFLOW, continuation power flow analysis is performed to obtain the loadability margin for each case. Limits on reactive generation are considered.

[Figure] Fig. 4. Proposed methodology: input patterns [P_1, ..., P_np, Q_1, ..., Q_nq, V_i,svc] are grouped by K-means clustering; for each cluster, features are selected by the entropy method, the input and output features are normalized, and a separate PSHNN (PSHNN-1, ..., PSHNN-k) is trained on the loadability margin.

4.2. Feature selection

The loadability margin is quantified in terms of the real power margin as the distance from the current operating point to the voltage collapse point. Therefore, real power information should be used to form the input features. Considering the real power injection at a particular bus incorporates the effects of both the real load and the real generation at that bus.

From the viewpoint of voltage stability, when a system is close to voltage collapse, a small increase in reactive power demand causes a large increase in the need for reactive power generation. When the generation cannot meet demand, voltage instability occurs. Therefore, the reactive power injections are also considered as input features.

An SVC either supplies reactive power to, or absorbs reactive power from, the system depending upon the operating conditions, through suitable control actions which change the parameters of the device. In this way, the SVC affects the loadability margin significantly. The voltage of the bus at which the SVC is connected effectively controls the amount of reactive power injected into the system, and has a potential role in deciding the loadability margin. Therefore, the bus voltage (V_i,svc), along with the firing angle (α_svc) of the SVC, is selected as


additional input variables. Thus the input vector becomes

    X = [P_1, ..., P_np, Q_1, ..., Q_nq, V_i,svc, α_svc]^T.     (2)

The performance of the neural network can be improved by clustering like patterns. In this work, the popular K-means clustering is used to form k clusters of input patterns having similar loadability margins.

4.3. Clustering of input features

Clustering is the process of partitioning or grouping a given set of patterns into disjoint clusters, such that patterns in the same cluster are alike and patterns belonging to two different clusters are different. K-means clustering is a popular approach to finding clusters due to its simplicity of implementation and fast execution; it appears extensively in the machine learning literature and in most data mining tool suites. The K-means algorithm is a method for finding k vectors m_j (for j = 1, ..., k) that represent the entire dataset. The data is considered to be partitioned into k clusters, with each cluster represented by its mean vector and each data point assigned to the cluster with the closest mean vector. Input patterns having similar values of loadability margin are grouped using the K-means clustering technique (Nabney, 2000).

The K-means algorithm works iteratively. At each stage, the N data points λ_n are partitioned into k disjoint clusters S_j, each containing N_j data points. The algorithm is as follows:

(i) Initialize m_j for the k clusters by choosing randomly from the data points.
(ii) Assign each data point to the cluster containing the closest mean vector, by calculating the squared Euclidean distance between each data point and the center of each cluster using

    d_nj = ||λ_n − m_j||².     (3)

(iii) Calculate the mean vector of each cluster using

    m_j = (1/N_j) Σ_{n ∈ S_j} λ_n,     (4)

where m_j is the center of the jth cluster, given by the mean of the data points belonging to that cluster.
(iv) Calculate the error function using (5). This error function, the total within-cluster sum of squares, is minimized:

    E = Σ_{j=1}^{k} Σ_{n ∈ S_j} ||λ_n − m_j||².     (5)

(v) Stop if there is no further change to the error E; else go to step (ii).
(vi) Return S_j and m_j.
(vii) Group the input patterns corresponding to λ_n in each cluster, thus forming k disjoint sets of input–output pairs.

The number of variables becomes very large as the size of the power system increases. It is not necessary to use all the available variables to train the network; doing so would increase the number of input nodes and result in a complex structure requiring a long training time. Considering the real and reactive power injections at all the buses would form a large input vector to the neural network. A technique based on the system entropy method (Pao, 1989) is proposed to select only those real and reactive power injections which have the greatest effect on the loadability margin. This technique is described as follows.

4.4. Feature reduction by system entropy method

The term entropy has been used to describe the degree of uncertainty about an event. A large value of entropy indicates a high degree of uncertainty and minimum information about an event. The change in entropy for given information is defined as the information gain or entropy gain. By observing the loadability margin for variations of the real and reactive power injections at all the buses, the information gain is computed as follows:

(i) Arrange the real power injections at each bus, over the different patterns, in decreasing order and divide the whole range into g groups. For each bus, also arrange the values of the loadability margin over the different patterns in decreasing order and divide the whole range into g groups.
(ii) The probability of loadability group i and real power injection group j is calculated by

    P_ij = n_ij / Σ_{j=1}^{g} n_ij   for i = 1, 2, ..., g,     (6)

where n_ij is the number of patterns common to groups i and j.
(iii) For each loadability margin group i, the entropy H_i is calculated by

    H_i = Σ_{j=1}^{g} P_ij ln(1/P_ij).     (7)

(iv) The average entropy H_avg and the information gain G are given by

    H_avg = (1/g) Σ_{i=1}^{g} H_i,
    G = H_0 − H_avg,     (8)

where H_0 is the maximum entropy value, corresponding to the condition in which the probability of all g groups is equal to 1/g.
(v) The buses are ranked according to the magnitude of their information gain, i.e. by how much their real power injections affect the system loadability margin, and the first mp buses are selected. The real power injections at these buses are used as input features.
(vi) Repeat steps (i)–(v) for the reactive power injections; the reactive power injections at the first mq buses are also used as input features.
(vii) Repeat steps (i)–(vi) for each cluster separately until all the clusters are considered.
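As an illustration of the clustering step of Section 4.3 (steps (i)–(v), Eqs. (3)–(5)), a minimal K-means sketch on scalar loadability margins; the data values and function name are illustrative, not from the paper:

```python
import random

def kmeans_1d(points, k, iters=100, seed=0):
    """Minimal K-means on scalar loadability margins.
    Returns the cluster means m_j and the cluster index of each point."""
    rng = random.Random(seed)
    means = rng.sample(points, k)                       # step (i)
    assign = None
    for _ in range(iters):
        # Step (ii): assign to the nearest mean by squared distance, Eq. (3).
        new_assign = [min(range(k), key=lambda j: (p - means[j]) ** 2)
                      for p in points]
        if new_assign == assign:                        # step (v): converged
            break
        assign = new_assign
        # Step (iii): recompute each cluster mean, Eq. (4).
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                means[j] = sum(members) / len(members)
    return means, assign

# Illustrative margins forming two groups near 1.6 and 2.0 p.u.
margins = [1.58, 1.62, 1.60, 1.98, 2.02, 2.00]
means, assign = kmeans_1d(margins, k=2)
```

On well-separated margins such as these, the iteration settles on the two group means regardless of the random initialization.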

By this method, the number of real power injections is reduced from np to mp. Similarly, the number of reactive power injections is reduced from nq to mq. The feature reduction by the system entropy method yields the following input vector for each cluster:

    X = [P_1, ..., P_mp, Q_1, ..., Q_mq, V_i,svc, α_svc]^T.     (9)
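A compact sketch of the information-gain computation of Section 4.4 (Eqs. (6)–(8)) for a single bus; the equal-width grouping and all names here are illustrative assumptions, not the paper's implementation:

```python
import math

def information_gain(injections, margins, g):
    """Information gain G of a bus's power injection with respect to the
    loadability margin, per Eqs. (6)-(8), using g equal-width groups."""
    def group_of(v, lo, hi):
        # Map a value into one of g groups spanning [lo, hi].
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * g), g - 1)

    p_lo, p_hi = min(injections), max(injections)
    m_lo, m_hi = min(margins), max(margins)
    # n[i][j]: number of patterns in margin group i and injection group j.
    n = [[0] * g for _ in range(g)]
    for p, m in zip(injections, margins):
        n[group_of(m, m_lo, m_hi)][group_of(p, p_lo, p_hi)] += 1

    h_sum = 0.0
    for i in range(g):                       # entropy H_i per margin group
        total = sum(n[i])
        if total == 0:
            continue
        for j in range(g):
            if n[i][j]:
                pij = n[i][j] / total        # Eq. (6)
                h_sum += -pij * math.log(pij)  # Eq. (7)
    h_avg = h_sum / g                        # Eq. (8), average entropy
    h0 = math.log(g)                         # max entropy: all groups at 1/g
    return h0 - h_avg                        # gain G

# Perfectly correlated injection/margin gives the maximum gain ln(g);
# an uncorrelated one gives zero gain.
g_corr = information_gain([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0], 2)
g_flat = information_gain([1.0, 2.0, 1.0, 2.0], [10.0, 10.0, 20.0, 20.0], 2)
```

Ranking buses by G in descending order and keeping the first mp (or mq) reproduces steps (v)–(vi).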

4.5. Data normalization

During training of a neural network, input variables with higher values may tend to suppress the influence of smaller ones. To overcome this problem, the neural networks are trained with normalized input data, leaving it to the network to learn the weights associated with the connections emanating from these inputs. The raw data are scaled into the range 0.1–0.9 for use by the neural networks, to minimize the effects of magnitude differences between inputs. The input data thus generated are normalized for each cluster according to (10):

    x_n = 0.8 (x − x_min) / (x_max − x_min) + 0.1,     (10)

where x_min and x_max are the minimum and maximum values of data parameter x. In the case of output variables, very high numerical values are difficult to realize by the activation function. Output variables, therefore, may be normalized by (11):

    λ_i,n = λ_i / λ_max,     (11)

where λ_i is the loadability margin of the ith pattern, λ_max is the maximum loadability margin over the n patterns, and λ_i,n is the normalized value of the loadability margin for the ith pattern.
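Eqs. (10) and (11) are straightforward to apply; a minimal sketch (the function names are illustrative):

```python
def normalize_input(x, x_min, x_max):
    """Eq. (10): scale a raw input into the range [0.1, 0.9]."""
    return 0.8 * (x - x_min) / (x_max - x_min) + 0.1

def normalize_margin(lam_i, lam_max):
    """Eq. (11): normalize a loadability margin by the maximum margin."""
    return lam_i / lam_max
```

The input scaling maps x_min to 0.1, x_max to 0.9, and the midpoint to 0.5.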


4.6. Parallel self-organizing hierarchical neural networks (PSHNN)

PSHNN are multi-stage networks in which the stages operate in parallel rather than in series (Ersoy and Dneg, 1995). Each stage is a particular neural network, referred to as a stage neural network (SNN). All input vectors are presented to all the stages after nonlinear transformation. By implementing the stages in parallel, the speed of processing with several stages is almost the same as with one stage. A single output is assumed. In the proposed network, SNN_i represents the ith stage neural network. Any training algorithm may be chosen to train each stage neural network. S(n) is the input vector and λ_d(n) is the desired output, i.e. the loadability margin. W(n), X(n), Y(n), and Z(n) are obtained by the nonlinear transformations NLT_1, NLT_2, NLT_3, NLT_4 of S(n), respectively. All the nonlinear transformations are different. After SNN_i is trained, the error signal e_i of SNN_i is taken as the desired output of SNN_{i+1}. This process of adding stages is continued until the final error is negligible. A PSHNN consisting of four stages is proposed in this work; the architecture is shown in Fig. 5. Each SNN_i is trained using the scaled conjugate gradient algorithm (Moller, 1993). The training algorithm of the PSHNN is as follows:

(i) Train SNN_1 for λ_d(n). After SNN_1 is trained, the error signal for the first stage is

    e_1(n) = λ_d(n) − o_1(n).     (12)

(ii) Use e_1(n) as the desired output of SNN_2, and X(n) as the input signal to train SNN_2. After SNN_2 is trained, the error signal for the second stage is

    e_2(n) = e_1(n) − o_2(n).     (13)

(iii) Use e_2(n) as the desired output of SNN_3, and Y(n) as the input signal to train SNN_3. After SNN_3 is trained, the error signal for the third stage is

    e_3(n) = e_2(n) − o_3(n).     (14)

(iv) Use e_3(n) as the desired output of SNN_4, and Z(n) as the input signal to train SNN_4. After SNN_4 is trained, the error signal for the fourth stage is

    e_4(n) = e_3(n) − o_4(n).     (15)

(v) With a k-stage network, SNN_1, SNN_2, ..., SNN_k are trained, followed by the retraining of SNN_{k−1}, SNN_{k−2}, ..., SNN_2. This constitutes one sweep, referred to as forward–backward training. With the four-stage network, after all four stages are trained, SNN_3 and SNN_2 are retrained.
(vi) Calculate the final output of the network as

    λ_f(n) = o_1(n) + o_2(n) + o_3(n) + o_4(n).     (16)

(vii) Calculate the mean square error (MSE) for the training patterns as well as the testing patterns at the end of each sweep:

    MSE = (1/n) Σ_{i=1}^{n} [λ_d(i) − λ_f(i)]².     (17)

(viii) Train the PSHNN for a number of sweeps, i.e. repeat steps (i)–(vii), until the MSE for the testing patterns is reduced to a minimum.
(ix) Design another four-stage PSHNN for the next cluster and train it for the loadability margin following steps (i)–(viii).

[Figure] Fig. 5. Four-stage PSHNN.
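The stage-wise residual scheme of steps (i)–(vii) can be illustrated with a toy sketch in which each SNN is replaced by a simple linear least-squares model on a distinct nonlinear transform of the input. The paper trains each stage with scaled conjugate gradient; the data, transforms, and names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the input patterns S(n) and desired margins lambda_d(n).
S = rng.uniform(-1.0, 1.0, size=(200, 3))
lam_d = np.sin(S[:, 0]) + 0.5 * S[:, 1] ** 2 - 0.3 * S[:, 2]

# Four different nonlinear transformations NLT_1..NLT_4 of S(n).
nlts = [np.tanh, np.sin, lambda s: s ** 2, np.cos]

def fit_stage(features, target):
    """One 'stage': linear least squares with a bias term."""
    A = np.column_stack([features, np.ones(len(features))])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return lambda f, w=w: np.column_stack([f, np.ones(len(f))]) @ w

# Steps (i)-(iv): each stage is trained on the previous stage's error e_i.
target, outputs = lam_d, []
for nlt in nlts:
    stage = fit_stage(nlt(S), target)
    o = stage(nlt(S))
    outputs.append(o)
    target = target - o              # e_i(n) becomes the next desired output

lam_f = sum(outputs)                 # Eq. (16): final output is the stage sum
mse = float(np.mean((lam_d - lam_f) ** 2))  # Eq. (17)
```

Because each stage fits the residual left by its predecessors, the summed output of the four stages fits the target at least as well as the first stage alone.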

4.7. Generalization

The network is generalized by using a large training set and by employing the early stopping method. The performance of the network is tested on the set of testing patterns at the completion of every sweep of training. The training of the network is stopped when the MSE of the testing patterns increases consecutively for a specified number of sweeps. The weights at the minimum of the testing error are retained.
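The stopping rule above can be sketched as a small helper operating on the per-sweep testing MSEs; the function name, `patience` parameter, and sample values are illustrative assumptions:

```python
def early_stop_index(test_mse, patience=3):
    """Index of the sweep whose weights should be retained: the sweep with
    minimum testing MSE, scanning until the MSE has increased for
    `patience` consecutive sweeps."""
    best, rises = 0, 0
    for i in range(1, len(test_mse)):
        if test_mse[i] < test_mse[best]:
            best, rises = i, 0          # new minimum of the testing error
        elif test_mse[i] > test_mse[i - 1]:
            rises += 1
            if rises >= patience:
                break                   # stop training
        else:
            rises = 0
    return best

# Testing MSE falls for three sweeps, then rises for three: training stops
# and the weights from sweep index 2 are kept.
kept = early_stop_index([0.50, 0.40, 0.35, 0.36, 0.37, 0.38])
```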

5. Results and discussions

5.1. IEEE-30 bus system

It is assumed that an SVC of ±100 MVAR is placed at bus-30. Varying the loads at each bus randomly in the range of 50–150%, a total of 6000 load patterns are generated, and the loadability margin is found for each case using UWPFLOW. The generated patterns are divided into two clusters using the K-means algorithm. Details of the clustering are shown in Table 1.

For each cluster, the input features to the network are selected by the system entropy method. The real and reactive power injections at each bus are ranked according to information gain in descending order. Since there is a trade-off between training time and accuracy, the real and reactive power injections of the first 15 buses affecting the system loadability margin most are selected; these are given in Table 2. Additionally, the voltage at the SVC bus, i.e. bus-30 (V_30), and the firing angle (α_svc) of the SVC are selected. This forms a total of 32 inputs. The input data are normalized using (10). As the numerical value of the output is small, it is not normalized and is kept in its original form, i.e. the loadability margin in per unit on a base of 100 MVA.

In each cluster, 2500 patterns are used for training and the remaining patterns are used for testing. The training set is kept sufficiently large to avoid overfitting. A separate PSHNN is used for each cluster. Each SNN is trained by means of the scaled conjugate gradient algorithm. The number of epochs in each SNN is kept at 100 per sweep. Different sets of activation functions are


Table 1
Clustering details of patterns

System        Cluster no.  Center of cluster             No. of training  No. of testing
                           (loadability margin in p.u.)  patterns         patterns
IEEE-30 bus   1            1.9981                        2500             570
              2            1.5821                        2500             430
IEEE-118 bus  1            37.8964                       1950             100
              2            34.8583                       1850             100

Table 2
Buses selected by system entropy method

IEEE-30 bus, cluster 1:
  Real power injections: buses 1, 21, 5, 8, 12, 17, 24, 19, 15, 30, 7, 20, 4, 14, 23
  Reactive power injections: buses 8, 11, 2, 1, 5, 21, 13, 30, 12, 17, 24, 19, 15, 7, 20

IEEE-30 bus, cluster 2:
  Real power injections: buses 1, 5, 8, 7, 21, 17, 24, 12, 15, 19, 18, 30, 16, 20, 10
  Reactive power injections: buses 1, 30, 8, 5, 11, 2, 7, 21, 17, 24, 12, 15, 19, 18, 13

IEEE-118 bus, cluster 1:
  Real power injections: buses 1, 15, 11, 6, 13, 18, 60, 3, 16, 59, 36, 106, 93, 39, 34, 80, 70, 62, 92, 19, 90, 22, 69, 2, 52, 82, 12, 35, 27, 45, 118, 110, 23, 96, 28, 88, 50, 86, 97, 76
  Reactive power injections: buses 8, 12, 26, 24, 73, 113, 4, 65, 18, 20, 42, 6, 11, 36, 72, 25, 49, 13, 70, 60, 3, 16, 40, 46, 59, 61, 106, 62, 93, 39, 110, 104, 1, 66, 92, 100, 19, 90, 105, 22

IEEE-118 bus, cluster 2:
  Real power injections: buses 106, 110, 107, 80, 1, 105, 6, 104, 59, 13, 21, 28, 90, 20, 48, 74, 84, 58, 3, 7, 43, 93, 17, 54, 92, 101, 62, 94, 47, 66, 86, 96, 42, 41, 49, 16, 27, 115, 88, 114
  Reactive power injections: buses 8, 12, 107, 26, 104, 100, 106, 99, 110, 105, 73, 72, 65, 6, 4, 34, 116, 113, 74, 18, 111, 25, 36, 15, 13, 21, 28, 70, 59, 112, 24, 66, 48, 84, 58, 3, 1, 7, 90, 92

used in each SNN. The early stopping method is used so that the network generalizes well. The optimum number of hidden neurons is found to be 6; thus the neural network architecture of each SNN is 32-6-1 for both clusters. Due to limited space, the loadability margins of only the first 15 patterns of the testing set of cluster-1 are shown in Table 3. The final results are summarized in Table 5. The MSEs are 1.2575E-04 and 5.0414E-04 for the first and second clusters of the testing set, respectively. The maximum absolute percentage errors are 1.7178 and 3.9464 for the first and second clusters of the testing set, respectively.

Table 3
Testing errors for cluster-1 of IEEE-30 bus system

Sl. no.  Loadability margin (p.u.)   Errors   Errors (%)
         Target      Output
1        1.8070      1.8143         0.0073   0.4051
2        2.1208      2.1095         0.0113   0.5333
3        1.9930      1.9895         0.0035   0.1736
4        1.9987      1.9882         0.0105   0.5263
5        1.8298      1.8265         0.0034   0.1831
6        1.8379      1.8349         0.0030   0.1616
7        1.8807      1.9008         0.0201   1.0677
8        2.0004      2.0079         0.0075   0.3769
9        1.7921      1.7944         0.0023   0.1267
10       1.8177      1.8204         0.0027   0.1496
11       1.8771      1.8733         0.0038   0.2019
12       2.1923      2.2019         0.0096   0.4397
13       1.8112      1.8118         0.0006   0.0309
14       2.0134      2.0182         0.0048   0.2389
15       2.0154      2.0023         0.0131   0.6505

Table 4
Testing errors for cluster-1 of IEEE-118 bus system

Sl. no.  Loadability margin (normalized)  Errors   Errors (%)
         Target      Output
1        0.9148      0.9272              0.0123   1.3467
2        0.9505      0.9589              0.0084   0.8827
3        0.9379      0.9382              0.0003   0.0352
4        0.9212      0.9235              0.0023   0.2464
5        0.9200      0.9282              0.0082   0.8913
6        0.8958      0.9052              0.0093   1.0426
7        0.9419      0.9453              0.0035   0.3684
8        0.9383      0.9375              0.0009   0.0906
9        0.8887      0.8883              0.0004   0.0495
10       0.9072      0.9165              0.0093   1.0285
11       0.8916      0.8944              0.0027   0.3062
12       0.9605      0.9654              0.0049   0.5081
13       0.9294      0.9340              0.0046   0.4982
14       0.9295      0.9249              0.0046   0.4938
15       0.9273      0.9150              0.0123   1.3264

5.2. IEEE-118 bus system

It is assumed that an SVC of ±200 MVAR is placed at bus-20. Varying the loads at each bus randomly in the range of 50–150%, a total of 4000 load patterns are generated, and the loadability margin is found for each case using UWPFLOW. The generated patterns are divided into two clusters using the K-means algorithm. Details of the clustering are shown in Table 1.

For each cluster, the input features to the network are selected by the system entropy method. The real and reactive power injections at each bus are ranked

according to information gain in descending order. The real and reactive power injections of the first 40 buses are selected; these are given in Table 2. Additionally, the voltage at the SVC bus, i.e. bus-20 (V_20), and the firing angle (α_svc) of the SVC are selected. This forms a total of 82 inputs. The input data are normalized using (10). As the numerical value of the output is large, it is normalized using (11).

In each cluster, 100 patterns are used for testing and the remaining patterns are used for training. The training set is kept sufficiently large to avoid overfitting. A separate PSHNN is used for each cluster. Each SNN is trained by means of the scaled conjugate gradient algorithm. The number of epochs in each SNN is kept at 100 per sweep. Different sets of activation functions are used in each SNN. The early stopping method is used so that the network generalizes well. The optimum number of hidden neurons is found to be 15; thus the neural network architecture of each SNN is 82-15-1 for both clusters. To limit space, the loadability margins of only the first 15 patterns of the testing set of cluster-1 are shown in Table 4. The final results are summarized in Table 5. The MSEs are 1.0797E-04 and 1.4865E-04 for the first and second clusters of the testing set, respectively. The maximum absolute percentage errors are 3.9875 and 3.4632 for the first and second clusters of the testing set, respectively.

After the network is trained, the CPU time for one unseen pattern has been computed and compared with the conventional continuation power flow method (Table 6). The proposed method takes less CPU time than the conventional method. Further, its computation time is independent of the system size and the load pattern, whereas the computation time of the conventional method increases with the size of the system and also depends on the initial load pattern. All the computations reported in this paper were conducted on a personal computer with a Pentium-III 866 MHz processor.

Table 5
Testing errors summary

System        Cluster no.  Max. absolute percentage error  Mean square error
IEEE-30 bus   1            1.7178                          1.2575E-04
              2            3.9464                          5.0414E-04
IEEE-118 bus  1            3.9875                          1.0797E-04
              2            3.4632                          1.4865E-04

Table 6
Comparison of the proposed method with the conventional method

System        Cluster no.  CPU time for one unseen load pattern (s)
                           Using PSHNN    Conventional method
IEEE-30 bus   1            0.22           2.32
              2            0.22           2.56
IEEE-118 bus  1            0.22           7.12
              2            0.22           7.03

6. Conclusions

In this paper, a method using a parallel self-organizing hierarchical neural network for the estimation of the loadability margin of a power system with SVC is

proposed. Limits on reactive generation are considered while calculating the loadability margin. A separate PSHNN is designed and trained for each cluster. Once trained, this method is able to estimate the loadability margin for an unknown pattern almost instantaneously. The proposed method is fast and accurate; hence it can be used for online monitoring of the voltage stability status of any power system. The clustering of patterns improves the performance of the neural networks significantly. By using the system entropy method, the dimension of the input features is effectively reduced and proper features are selected.

7. Acknowledgement

This work was carried out during the Ph.D. program of the first author at the Alternate Hydro Energy Centre, Indian Institute of Technology, Roorkee, under the quality improvement program of MHRD, Government of India.

References

Canizares, C.A., 1999a. Modeling of TCR and VSI based FACTS controller. Internal Report, ENEL-POLIMI, Milan, Italy. Available at http://www.power.uwaterloo.ca
Canizares, C.A., 1999b. UWPFLOW: continuation and direct methods to locate fold bifurcations in AC/DC/FACTS power systems. Department of Electrical and Computer Engineering, University of Waterloo, Canada.
Ersoy, O.K., Dneg, S.W., 1995. Parallel self-organizing hierarchical neural network with continuous inputs and outputs. IEEE Transactions on Neural Networks 6 (5), 1037–1044.
Hingorani, N.G., 1993. Flexible AC transmission. IEEE Spectrum, April, pp. 40–45.
Jeysurya, B., 1994. Artificial neural networks for power system steady state voltage instability evaluation. Electrical Power System Research 29 (2), 85–90.
Jeyasurya, B., 2000. Artificial neural networks for on-line voltage stability assessment. IEEE Power Engineering Society Summer Meeting (4), 2014–2018.
Moller, M.F., 1993. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6, 525–533.
Nabney, I., 2000. Netlab: Algorithms for Pattern Recognition. Springer, London.
Pandit, M., Srivastava, L., Sharma, J., 2001. Contingency ranking for voltage collapse using parallel self-organizing hierarchical neural network. International Journal of Electrical Power & Energy Systems 23 (5), 369–379.
Pao, Y.H., 1989. Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, Reading, MA.
Power system test cases. Available at http://www.ee.washington.edu/research/pstca
Song, Y.H., Johns, A.T., 1999. Flexible AC Transmission Systems (FACTS). The Institution of Electrical Engineers, London, UK.
Van Cutsem, T., Vournas, C., 1998. Voltage Stability of Electric Power Systems. Kluwer Academic Publishers, Boston.
Vidyasagar, S.V., Rao, N.D., 1993. Artificial neural networks and their applications to power systems: a bibliographical survey. Electrical Power System Research 28 (1), 67–79.