Copyright © IFAC 12th Triennial World Congress, Sydney, Australia, 1993
GENERATOR RESYNCHRONISATION PREDICTION USING AN ARTIFICIAL NEURAL NETWORK
A.K. Famellaris and C.D. Vournas
Electrical Energy Systems Laboratory, National Technical University, Patission 42, Athens, Greece
Abstract: In this paper, an Artificial Neural Network is used for the prediction of pole slips of a synchronous generator which is in the process of resynchronisation after a fault. The measurements considered are the rotor speed and angle at the moment the circuit breaker recloses. Given these inputs, the Artificial Neural Network suggested is capable of predicting whether the generator will resynchronise or skip a pole. This prediction is very fast, so it could be used in real-time control. The method proposed has been tested with a simulation program based on a synchronous generator model with seven state variables, including also the saturation in the d-axis.
Key Words: Electric Generator, Pole Slip, Neural Nets, Backpropagation Algorithm, Power System Control, Prediction, Transient Stability.
1. INTRODUCTION

When a fault occurs in a Power System, the generator protection system opens the circuit breaker and disconnects the machine. Some instants later (e.g. some tenths of a second) the circuit breaker attempts to reclose, so that normal operation can be resumed in case the fault has been cleared. Meanwhile the generator has lost synchronism and the question arises whether it will be able to resynchronise at the time the circuit breaker recloses.
Because of the fast phenomena involved, real-time control is not possible unless an equally fast prediction of synchronising conditions is available. In this paper an Artificial Neural Network (ANN) is proposed for the real-time prediction of such conditions.

An ANN is a massively parallel system, capable of processing large amounts of information in very short time. The roots of the research in Neural Networks and Parallel Distributed Processing can be found in the research work of neurologists, but these techniques became popular during the 1960s with the work of Minsky (1958) and Rosenblatt (1962). The interest in Neural Networks was renewed with the work of Rumelhart and McClelland (1989) and Kohonen (1977, 1984). In recent years Artificial Neural Networks appear in a number of applications, both for prediction and control (Miller et al., 1990; Webb and Lowe, 1988). In the Power Systems area these applications include Load Forecasting (Park and El-Sharkawi, 1990), identification (Hartana and Richards, 1990) and adaptive controllers of generator systems (Wu and Hogg, 1988).

2. THE BACKPROPAGATION ALGORITHM

As stated in the Introduction, an ANN is a system of interconnected computational elements, which are called neurones. These computational elements operate in parallel, so an ANN can be defined as a massively parallel system with an organisation similar to biological neural nets. Depending on the organisation of neurones and the way they process information, many types of such networks have been applied for prediction and control.

One of the most widely used Artificial Neural Networks is the one based on the Backpropagation Model. Fig. 1 shows the typical architecture of such a network.

Fig. 1. Typical Architecture of a Backpropagation ANN (input pattern, input layer, hidden layers, output layer, output pattern).

The typical Backpropagation network consists of an input layer, an output layer and a number of hidden layers. Each layer is fully connected to the previous one. Each layer contains a number of neurones and each neurone in a layer is connected to the neurones of the previous layer with different weights. So the input to a neurone (which does not belong to the input layer) is the weighted sum of all the
outputs of the neurones of the previous layer. The output of a neurone is produced by passing this input through a transfer function. If O_i is the output of a unit i in a layer a, the input of a neurone j which belongs to the next layer b can be expressed as:

$$\mathrm{net}_j = \sum_i w_{ji} O_i \qquad (1)$$

where i is the node number of layer a and j the node number of layer b, while the output of a unit j in layer b is:

$$O_j = f(\mathrm{net}_j) \qquad (2)$$

where f is the activation function.
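As an illustration of equations (1) and (2), the following minimal Python sketch (our own, not code from the paper; the function and variable names are assumptions) computes the outputs of one fully connected layer, using for concreteness the sigmoid activation with threshold θ_j that is introduced in equation (3) below.

```python
import numpy as np

def sigmoid(net, theta):
    # Equation (3): output varies continuously between 0 and 1
    return 1.0 / (1.0 + np.exp(-(net - theta)))

def layer_forward(O_prev, W, theta):
    """Forward pass for one fully connected layer.

    O_prev : outputs of the previous layer a, shape (n_a,)
    W      : weights w_ji from layer a to layer b, shape (n_b, n_a)
    theta  : thresholds of the neurones of layer b, shape (n_b,)
    """
    net = W @ O_prev              # equation (1): weighted sum of the previous outputs
    return sigmoid(net, theta)    # equation (2): pass the net input through the activation
```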
Many activation functions have been proposed in the literature, the most practical and convenient of which are the linear function, the limiting function, the threshold function, and the sigmoid function. The latter type of activation function (sigmoid) is shown in Fig. 2 and has the following activation formula (Rumelhart and McClelland, 1989):

$$O_j = f(\mathrm{net}_j) = \frac{1}{1 + e^{-(\mathrm{net}_j - \theta_j)}} \qquad (3)$$

where θ_j is the threshold of the activation function. As is obvious by inspecting equation (3), the output of the sigmoid function varies continuously from 0 to 1.

Fig. 2. Sigmoid activation function (output vs. effective input net_j - θ_j).

The sigmoid function is preferable to the other activation functions mentioned because it is continuous, non-linear, non-decreasing and differentiable. An ANN can be trained by providing a training set of input and output vectors and then adapting the weights between neurones so that the outputs generated by the ANN are as close as possible to the desired outputs. In this paper the generalised delta rule is used as the training algorithm. To measure how close the feedforward outputs supplied by the ANN are to the desired values, the following function proposed by Rumelhart and McClelland (1989) was used. In this function the error depends on the difference between the desired value t_pj and the neural network output O_pj for every pattern and for every output node. (A pattern is defined as a pair of input and output vectors.)

$$E = \sum_p 0.5 \sum_j (t_{pj} - O_{pj})^2 \qquad (4)$$

where j is the jth node of the output layer and p is a pattern index.

The goal of the training process is to minimise the cumulative error E for all training patterns. To achieve this goal the generalised delta rule consists of the following steps:

Step 1: The input pattern p is propagated forward through the network and the output for each unit and for every layer is computed using the initial (and subsequently the corrected) weights. The output value of every output node j is then compared with the corresponding target value for that node and the following error signal is calculated:

$$\delta_{pj} = (t_{pj} - O_{pj}) \, f'(\mathrm{net}_{pj}) \qquad (5)$$

where f' is the derivative of the activation function.

Step 2: The weights for all the connections with the output layer are updated by adding the following correction:

$$\Delta w_{ji} = \eta \, \delta_{pj} O_{pi} \qquad (6)$$

where η is the learning rate. The output O_pi of node i is known from the feedforward pass.

Step 3: For all layers other than the output layer, δ_pj is calculated recursively, starting with the layer previous to the output, using the equation:

$$\delta_{pj} = f'(\mathrm{net}_{pj}) \sum_k \delta_{pk} w_{kj} \qquad (7)$$

The new weights are calculated by adding to the weights used in the feedforward pass the correction value calculated in the previous step. Steps 1 to 3 are recursively repeated until the error E is less than a desired value or a specified maximum number of iterations is reached.
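The following Python sketch (our own illustration, not code from the paper) implements one training pass of the generalised delta rule of equations (4) to (7) for a network with a single hidden layer; the layer shapes, the default learning rate and the folding of the thresholds into the weights are assumptions made for brevity.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def train_pattern(x, t, W1, W2, eta=0.8):
    """One generalised-delta-rule update for a one-hidden-layer network.

    x, t   : input pattern and target vector
    W1, W2 : weight matrices input->hidden and hidden->output (updated in place)
    """
    # Step 1: forward pass
    O_h = sigmoid(W1 @ x)
    O_o = sigmoid(W2 @ O_h)

    # Error signal at the output layer, equation (5); for the sigmoid f'(net) = O(1 - O)
    delta_o = (t - O_o) * O_o * (1.0 - O_o)

    # Step 3 (computed before the update so the feedforward weights are used):
    # back-propagated error signal for the hidden layer, equation (7)
    delta_h = O_h * (1.0 - O_h) * (W2.T @ delta_o)

    # Step 2: weight corrections, equation (6), with learning rate eta
    W2 += eta * np.outer(delta_o, O_h)
    W1 += eta * np.outer(delta_h, x)

    # Contribution of this pattern to the cumulative error E of equation (4)
    return 0.5 * np.sum((t - O_o) ** 2)
```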
3. RESYNCHRONISATION AND POLE SLIPPING

A synchronous machine connected to a power system can pull out of synchronism due to a variety of reasons, such as a short circuit, breaker opening, sudden loss of load, extreme underexcitation etc. In such cases the machine is either disconnected from the network, or allowed a limited period of pole slipping (Hariharan, 1976). In this paper we consider a synchronous generator connected to an infinite bus, i.e. to a large Power System whose voltage and frequency can be assumed constant. When a fault such as a short circuit occurs in this system the generator circuit breaker opens. After a short time interval the circuit breaker closes again, trying to reconnect the generator to the network. Supposing that the fault has been cleared when the circuit breaker recloses, there are three possibilities:
- The generator returns to synchronism at the nominal operating point (speed equal to synchronous speed and angle deviation equal to zero).
- The generator fails to resynchronise.
- The generator resynchronises after a number of pole slips. In this case the generator speed is equal to synchronous speed and its angle deviation is a multiple of 2π radians.
Usually in transient stability analysis one is concerned with the first case, i.e. the resynchronisation without pole slips.
However, the method suggested in this paper can be used to predict the exact number of pole slips before resynchronisation. The number of pole slips depends upon the values of the rotor speed ω and rotor angle δ of the generator at the time the circuit breaker attempts reclosure. The rotor speed can be higher or lower than the synchronous speed depending on the type of fault.
The angle δ at the time of reclosure depends on the exact history of the fault, but for the purpose of our analysis it can be assumed to be a random variable, having the same probability to lie anywhere in the interval (-π, π).

The most obvious, though time consuming, method for the prediction of generator stability is the simulation of the machine equations for different initial conditions. In this paper digital simulation has been used for training the ANN and as a measure of its accuracy. The synchronous generator has been simulated using a standard model with seven state variables (Concordia, 1952). Saturation in the direct axis (Krause, 1986) and a Type 1 excitation system (IEEE, 1963) have also been included in the generator model.

The generator has been simulated for a wide range of initial speed and rotor angle conditions. The initial speed ω0 ranged between 90% and 110% of synchronous speed in 2% steps, while for each initial speed 9 initial values of δ0 in the interval (-π, π) in π/4 steps were used. For each node of this grid, the number of the resulting pole-slips is shown in Fig. 3. The stability region is roughly shown by the solid line drawn in this Figure.

Fig. 3. Number of pole slips in the ω0-δ0 plane (initial speed in % vs. initial angle δ0). Simulation results.

Fig. 4 shows the system trajectory in the ω-δ plane for initial values of the synchronous generator, at the time of the circuit breaker reclosure, ω0 = 106% and δ0 = π/2. As seen in Fig. 4, the generator slips a pole and then, after some oscillations, resynchronises at the equilibrium point with zero speed deviation and angle δ = 2π.

Fig. 4. System trajectory with pole slip (rotor angle in rad vs. speed deviation in rad/sec).

As mentioned above, the number of pole-slips of a synchronous generator during the resynchronisation process depends on the values of rotor speed ω0 and angle deviation δ0 at the time the circuit breaker recloses. It is therefore important to obtain the region of the two-dimensional space ω-δ where the generator synchronises without pole slips, so that the generator stability can be predicted in real-time by measuring the values of ω and δ during the fault. To quantify the appearance of pole-slips we define the pole-slip indicator (p.s.i.) as follows:

p.s.i. = 0 if the generator resynchronises without pole slip,
p.s.i. = 1 if the generator slips a pole.

The method for estimating the variable p.s.i. is described below:

Step 1: The plane ω-δ is spanned with a grid of equally spaced ω-δ pairs.
Step 2: The system is simulated for these initial values of ω and δ and the pole-slip indicator p.s.i. is determined.
Step 3: The learning set of the ANN is formed using the ω-δ pairs and the corresponding values of p.s.i.
Step 4: An ANN with two inputs (ω and δ) and one output (p.s.i.) is formed.
Step 5: The ANN is trained using the Backpropagation Algorithm, and the weights of the network are adapted.
Step 6: The value of the p.s.i. for a large number of ω-δ pairs is finally calculated by the ANN and is discretised to 0 or 1. In this way the stability region in the ω-δ plane is calculated.

A network with two hidden layers, with twelve nodes on each hidden layer and a sigmoid transfer function for every node, was selected for this application. The learning set of patterns for the ANN has been formed using the synchronous generator simulation results shown in Fig. 3. The ANN with the configuration described above has been trained with a learning rate of 0.8. This learning rate has been chosen by a trial and error method, to achieve the best convergence speed without oscillations. The training of the ANN converged after 500 iterations to an error less than 10^-4.
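To make Steps 1 to 6 concrete, the following Python sketch (our own illustration; simulate_pole_slip is a purely hypothetical stand-in for the seven-state-variable generator simulation) builds the ω-δ learning grid, trains a 2-12-12-1 sigmoid network with the generalised delta rule at the learning rate of 0.8 quoted above, and finally discretises the network output to map the stability region.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_pole_slip(w0, d0):
    """Hypothetical placeholder for the generator simulation: should return
    1 if the generator slips a pole for initial speed w0 (p.u.) and angle d0
    (rad), and 0 otherwise."""
    raise NotImplementedError

# Steps 1-3: grid of initial conditions and the corresponding p.s.i. values
speeds = np.arange(0.90, 1.10 + 1e-9, 0.02)           # 90% .. 110% in 2% steps
angles = np.arange(-np.pi, np.pi + 1e-9, np.pi / 4)    # 9 values of delta_0 in pi/4 steps
X = np.array([[w, d] for w in speeds for d in angles])
T = np.array([simulate_pole_slip(w, d) for w, d in X], dtype=float)

# Step 4: a 2-12-12-1 network of sigmoid units (thresholds folded in as bias weights)
rng = np.random.default_rng(0)
sizes = [2, 12, 12, 1]
W = [rng.uniform(-0.5, 0.5, (sizes[i + 1], sizes[i] + 1)) for i in range(3)]

def forward(x):
    outs = [x]
    for Wl in W:
        x = sigmoid(Wl @ np.append(x, 1.0))    # constant 1 plays the role of the threshold
        outs.append(x)
    return outs

# Step 5: training with the generalised delta rule, learning rate 0.8
eta = 0.8
for _ in range(500):
    for x, t in zip(X, T):
        outs = forward(x)
        delta = (t - outs[-1]) * outs[-1] * (1.0 - outs[-1])              # eq. (5)
        for l in range(2, -1, -1):
            grad = np.outer(delta, np.append(outs[l], 1.0))               # eq. (6)
            delta = outs[l] * (1.0 - outs[l]) * (W[l][:, :-1].T @ delta)  # eq. (7)
            W[l] += eta * grad

# Step 6: discretise the network output to 0 or 1 over any dense grid of w-d pairs
def psi(w, d):
    """Pole-slip indicator predicted by the trained ANN."""
    return int(forward(np.array([w, d]))[-1][0] > 0.5)
```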
The stability region estimated by the ANN is shown in Fig. 5. Obviously this is a more representative region than the rough one shown in Fig. 3.

Fig. 5. The stability region in the ω0-δ0 plane estimated by the ANN.
4. TRANSIENT STABILITY ESTIMATION

To evaluate the performance of the proposed method a number of comparisons have been made. The results are summarised in Fig. 6. In this Figure Δω_sim is the deviation of the initial rotor speed ω0 from the synchronous speed, in percent of synchronous speed, and δ0 is the angle when the first pole-slip takes place, taken from simulations of the synchronous generator. Δω_pred is the deviation of the initial rotor speed ω0 from the synchronous speed, also in percent of synchronous speed, for which a pole-slip has been predicted by the proposed method. In both cases the same initial angle deviation δ0 is assumed. The error of each prediction is defined as:

$$\mathrm{p.e.} = \Delta\omega_{\mathrm{sim}} - \Delta\omega_{\mathrm{pred}}$$
So when the prediction error p.e. is positive the estimation is conservative (pessimistic); otherwise the estimation is optimistic. As can be seen in Table 1, when the synchronous generator accelerates the p.e. is small and negative for most cases, i.e. the prediction is optimistic. When the synchronous generator decelerates the p.e. is positive for all cases, i.e. the prediction is conservative.

Table 1. Prediction error

no.   Δω_sim   δ0      Δω_pred   p.e.
1      1.3     π        1.4      -0.1
2      3.2     3π/4     2.5      +0.7
3      5.2     π/2      5.3      -0.1
4      6.3     π/4      6.5      -0.2
5      7.2     0        7.4      -0.2
6     -3.1     π       -2.9      +0.2
7     -3.8     3π/4    -3.1      +0.4
8     -5.3     π/2     -5.0      +0.3
9     -6.5     π/4     -6.5      +0.0
10    -7.2     0       -6.8      +0.6
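As a minimal illustration of the prediction-error definition, the following snippet (our own) evaluates case 3 of Table 1:

```python
def prediction_error(dw_sim, dw_pred):
    # p.e. = delta_w_sim - delta_w_pred: positive values are conservative
    # (pessimistic) predictions, negative values are optimistic ones
    return dw_sim - dw_pred

print(prediction_error(5.2, 5.3))   # -0.1, i.e. a slightly optimistic prediction
```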
5. CONCLUSIONS

A new method of predicting the resynchronisation conditions of a synchronous machine after a fault has been presented. The method uses the new and emerging technology of Artificial Neural Networks. The strength of this method lies in the adaptability of Neural Networks, in their ability to learn by example and to improve their performance, reducing the prediction error as the learning set is increased, and finally in their structure, which allows fast and fault-tolerant parallel information processing. An ANN with the Backpropagation learning algorithm has been described, implemented and tested on a synchronous generator. The initial investigations indicate that this approach performs the prediction task successfully in real-time. Further efforts are needed to extend the accuracy of the method.
6. REFERENCES

Concordia C. (1951). Synchronous Machines, Theory and Performance. General Electric Co.
Hariharan S. (1976). Asynchronous Operation and Resynchronisation of Synchronous Machines. Proc. IEE, Vol. 123, No. 11.
Hartana R. K. and Richards G. G. (1990). Harmonic source monitoring and identification using neural networks. IEEE Trans. on Power Systems, Vol. 5, No. 4, pp. 1098-1104.
IEEE Committee Report (1963). Computer Representation of Excitation Systems. IEEE Trans. on Power Apparatus and Systems, Vol. 87, No. 6.
Kohonen T. (1977). Associative Memory: A System Theoretical Approach. Springer, New York.
Kohonen T. (1984). Self Organisation and Associative Memory. Springer Verlag, Berlin.
Krause P. (1986). Analysis of Electric Machinery. McGraw-Hill.
Miller T. and Werbos P. (1990). Neural Networks for Control. MIT Press.
Minsky M. (1958). Some Methods of Artificial Intelligence and Heuristic Programming. In Mechanisation of Thought Processes, Proceedings of a Symposium held at the National Physical Laboratory, Vol. 1.
Park D. and El-Sharkawi M. (1990). Electric Load Forecasting Using An Artificial Neural Network. IEEE/PES Summer Meeting, Minneapolis, Minnesota.
Rosenblatt F. (1962). Principles of Neurodynamics. Spartan, New York.
Rumelhart D. and McClelland J. (1989). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Vol. 1.
Webb A. and Lowe D. (1988). A comparison of nonlinear optimisation strategies for feed-forward adaptive layered networks. Royal Signals and Radar Establishment, Memorandum No. 4157.
Wu Q. H. and Hogg B. W. (1988). Adaptive controller for a turbogenerator system. IEE Proc., Vol. 135, Pt. D, No. 1, pp. 35-42.